Scopedog Posted Saturday at 11:25 PM (edited)
The basic story of Macross Plus, released 30 years ago: an artificial-intelligence pop idol falls in love with a man and takes over an extremely dangerous military drone, which is defeated by a guy piloting a mind-controlled aircraft with highly advanced variable geometry. It sounds ridiculous, but I feel we're not that far away from this scenario in real life. Artificial intelligence is advancing daily and AI pop stars already exist. China, and probably the USA, are working on variable geometry.
Edited Saturday at 11:32 PM by Scopedog
Seto Kaiba Posted Sunday at 04:17 AM
3 hours ago, Scopedog said: It sounds ridiculous but I feel we're not that far away from this scenario in real life. Artificial intelligence is advancing daily and AI pop stars already exist. China, and probably the USA, are working on variable geometry.
We are many decades, if not centuries, from having to worry about something like that. When people think about the term AI, what they're thinking of, more often than not, is what's called Artificial General Intelligence: a computer that can think and reason like a human being. That technology is purely science fiction for a bunch of reasons, mainly hardware and software limitations on the computer's side. The low-end estimate of how much computational power is trapped in the average human's noggin is about 1 exaflop. That's 1 x 10^18 floating-point operations per second. And that's done on about 20 watts of power. Exaflop-scale supercomputers became a thing for the first time in 2022, but they're warehouse-sized installations made up of thousands of separate high-end processors joined by hundreds of kilometers of cabling and coolant piping, draw tens of millions of watts of power to operate (power station-level energy demands), and still have all the limits of a machine processing linear operations in binary. They're not capable of fuzzy logic, abstract reasoning, or any of the other insane stuff that your squishy human brain does on a minute-by-minute basis. This is the kind of thing that might become possible when we have quantum supercomputers... but we're still trying to figure out how to reliably store single qubits. The AI technology the news is fussing over is massively oversold. LLMs like Google's Gemini, Apple's Apple Intelligence, OpenAI's ChatGPT, xAI's Grok, etc. are nothing more than extremely inefficient upscalings of the same kind of text autocomplete in your phone's onscreen keyboard app.
They possess no reasoning capability. All they're capable of doing is probability-based pattern-matching. Instead of just guessing the next word you might type based on probabilities from sample text, they take keywords and string together vast runs of text based purely on the probability of those words appearing in that order, derived from the gargantuan amount of raw text they've been fed from books, websites, and so on. Which is why they "hallucinate": they have no capacity to actually understand the material you're exchanging with them. It's almost an "infinite monkeys" situation, with an extremely powerful server essentially guessing wildly based purely on next-word probability until it comes up with a plausible-sounding string of words that it vomits up. The art-based ones are no different. They break sample data down into mathematical models and then string those models together based on keywords from your prompt. Because their function is purely probability-analysis-based, they can be "poisoned" with junk data that messes up those probability tables and makes them draw or talk even more nonsense than they normally do. Those "AI" pop stars are, variously, just people in mo-cap suits with Auto-Tune steering rigged 3D models like a VTuber, or a combination of existing text, speech-synthesis, and video-synthesis AI software that's just running preprogrammed and vetted prompts to avoid the system spazzing out. Will there be "AI"-powered drone weapons in the near future? Absolutely. Not weapons that can think for themselves, but weapons that use image-recognition software to identify people or military vehicles, connected to basic fire-control systems. Something broadly analogous to the QF-2200 Ghost from Macross Zero, essentially. Something like the Ghost X-9, Sharon Apple, the Siren Delta System, Skynet, Commander Data, etc. is a sci-fi pipe dream even with the foreseeable future's technology.
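The "next-word probability" idea can be shown with a toy sketch (my own illustration, not anything from the post): a bigram table built from a made-up sample string. Real LLMs use neural networks over tokens rather than literal lookup tables, but the statistical core is the same, and it also shows why "poisoned" training text shifts the output: the generator can only ever reflect the counts in its table.

```python
from collections import Counter, defaultdict
import random

def train_bigrams(text):
    """Count which word follows which in the sample text."""
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, start, length=8):
    """Repeatedly pick a next word in proportion to how often it
    followed the previous word in the training text. No understanding
    is involved anywhere -- it is pure probability lookup."""
    out = [start]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:  # word never appeared with a successor
            break
        words_, counts = zip(*choices.items())
        out.append(random.choices(words_, weights=counts)[0])
    return " ".join(out)

# Tiny made-up corpus; a real model trains on terabytes of text.
sample = "the pilot flew the fighter and the pilot landed the fighter"
table = train_bigrams(sample)
print(generate(table, "the"))
```

Feeding junk text into `train_bigrams` corrupts the counts, and every generation afterward reflects the corruption: that is data poisoning in miniature.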
pengbuzz Posted Sunday at 05:05 AM
47 minutes ago, Seto Kaiba said: [...] The low end estimate of exactly how much computational power is trapped in the average human's noggin is about 1 exaflop. [...] And that's done on about 20 watts of power. [...]
I disagree:
Spoiler
Some folks out there don't even have 2 watts running their brains....
Big s Posted Sunday at 05:54 AM
6 hours ago, Scopedog said: [...] It sounds ridiculous but I feel we're not that far away from this scenario in real life. Artificial intelligence is advancing daily and AI pop stars already exist. [...]
I don't think it's too far out. Some dude tried to marry a Hatsune Miku hologram, and it really didn't have an AI. AI as it stands is a long way from something that could run a military op, but we do have self-driving vehicles and a lot of the tech is coming along fairly quickly. Might only be a few decades before an AI is flying a drone and able to pick targets on its own, maybe less. We already have pre-programmed drones doing wild things at concerts. Might not be too far out for combat with all the conflicts popping up.
Dynaman Posted Sunday at 05:28 PM
An AI-controlled drone going "nuts" and needing to be shot down? IF someone is silly enough to hook up an autonomous drone with weaponry without an off switch, then we could possibly have that today. An actual AI? Nowhere near it.
Seto Kaiba Posted Sunday at 07:29 PM
12 hours ago, Big s said: [...] but we do have self driving vehicles and a lot of the tech is coming along fairly quickly.
Eh... as someone who works on vehicular autonomy systems professionally, we emphatically do not have self-driving vehicles yet. It's actually a long way off in terms of technological capability. As with every other "AI" technology, the main stumbling block is processing power. You need a very powerful computer to manage all the sensor and vehicle inputs needed to safely drive even at low speeds on city streets. So much so that the few experimental cars certified with SAE Level 4 limited/partial autonomy have computers so large they have to be mounted on top of larger cars like minivans or SUVs. Those systems are only really capable of navigating a limited area on well-mapped city streets in good weather, and still require human intervention when they encounter an unsafe situation. Those computers draw so much power in normal operation that the cars have to be fitted with auxiliary power systems just for the computer, and they suffer reduced range from the extra weight and electrical demand. A truly autonomous vehicle would be SAE Level 5, which nobody has reached yet because it requires the car to be able to function totally independently in any conditions and on any roads. A computer advanced enough to do this would be prohibitively large and heavy, and would draw too much power to actually put on a car. About the best you can get in a commercially available car is SAE Level 2 or Level 2+ autonomy, which is "Advanced Driver Assistance": features like lane keeping or adaptive cruise control. The car is not actually capable of driving itself.
Tesla's "Full Self-Driving" is actually a Level 2 system that is falsely advertised as autonomous, which is why Tesla's been sued many times for false advertising and wrongful death by customers who believed their fraudulent claims and died as a result of their "autonomous" car crashing into stationary objects or other cars. (Their impressive demonstrations of autonomous capability were found to actually be staged with cars driven by remote control.)
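For reference, the driving-automation levels being discussed break down roughly like this. The descriptions are my paraphrase of the SAE J3016 ladder, not quotes from the standard, so treat the wording as approximate:

```python
# Rough paraphrase of the SAE J3016 driving-automation levels.
SAE_LEVELS = {
    0: "No automation: human does all driving",
    1: "Driver assistance: steering OR speed support (e.g. cruise control)",
    2: "Partial automation: steering AND speed support, human must supervise",
    3: "Conditional automation: system drives in limited conditions, human must take over on request",
    4: "High automation: no human needed within a restricted domain (geofenced, good weather)",
    5: "Full automation: no human needed anywhere, in any conditions",
}

def driver_must_supervise(level: int) -> bool:
    """At Level 2 and below, the human is always responsible for
    monitoring the road, whatever the marketing says."""
    return level <= 2

for lvl, desc in SAE_LEVELS.items():
    print(f"Level {lvl}: {desc}")
```

By this ladder, the "full self-driving" systems in showrooms today sit at Level 2, the experimental robotaxis at Level 4 within their geofence, and Level 5 remains unreached.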
Scopedog Posted Sunday at 07:44 PM Author (edited)
It occurs to me that the anime doesn't make it quite clear whether Sharon Apple is operating the Ghost herself or just designating targets while it operates autonomously. I think she was probably operating it herself. She's shown to be able to multitask, like being a different version of herself when she seduces Yang at the end, while controlling the SDF-1 and also bantering with Myung all at the same time. If she were just designating targets, the target would have been the YF-19, so the Ghost had no reason to face off with the YF-21 (after 30 years, the YF-21 is still so ******** cool).
Edited Sunday at 07:46 PM by Scopedog
Seto Kaiba Posted Sunday at 08:36 PM (edited)
56 minutes ago, Scopedog said: [...] the anime doesn't make it quite clear whether Sharon Apple is operating the Ghost herself or if she just designated targets and it operated autonomously. I think she was probably operating it herself. [...]
She's probably operating the Ghost herself. According to her Macross Chronicle character sheet, the Sharon-type AI was developed for the military by the Macross Concern's Palo Alto II Research Institute. It was designed to be a fleet supervisory support AI for use in emigrant fleets. Its job was twofold: to assist with managing stress among the populations of early emigrant ships (which were on the spartan side in terms of living conditions) with entertainment and subliminal audiovisual hypnosis where necessary, and to take over control of the fleet on its own should its human commanders be incapacitated during an emergency. The career of the virtuoid idol singer "Sharon Apple" was essentially a covert test of the incomplete Sharon-type AI's entertainment and population-management systems, disguised as a music company's avant-garde tech demo. When Sharon Apple went crazy rampage nuts as a result of being rushed to completion with an illegal and dangerous bio-neural processor and having her emotion data sampled from a woman with more baggage than Delta Airlines, she used the command-and-control functions she was designed with to seize control of the Macross, the Ghost X-9, and all networked defenses on Earth.
She wasn't able to break into the YF-21's systems the way she broke into the YF-19's because, as noted earlier in the OVA, half of the YF-21's computer is the pilot's brain.
Edited Sunday at 08:41 PM by Seto Kaiba
Big s Posted Sunday at 09:41 PM
2 hours ago, Seto Kaiba said: You need a very powerful computer to manage all the sensor and vehicle inputs needed to safely drive even at low speeds on city streets. So much so that the few experimental cars certified with SAE Lv4 limited/partial autonomy have computers so large they have to be mounted on the top of larger cars like minivans or SUVs.
It only took a few years to go from room-sized computers to the personal computers that outshined them, and from those to the smartphone. Depending on the necessity, technology moves fairly quickly, and world conflicts supported by superpowers tend to move technology faster than most catalysts.
Scopedog Posted Sunday at 10:06 PM Author
1 hour ago, Seto Kaiba said: [...] The career of the virtuoid idol singer "Sharon Apple" was essentially a covert test of the incomplete Sharon-type AI's entertainment and population management systems disguised as a music company's avant garde tech demo.
I didn't know about this background info. I'm not generally conspiracy-minded, but I find it very hard to believe that the US government isn't working on this exact type of subversive AI technology.
Dynaman Posted Sunday at 11:23 PM
1 hour ago, Scopedog said: [...] I find it very hard to believe that the US government isn't working on this exact type of subversive AI technology.
You should worry more about some corporation doing so. A government will just come in and hook it up to weaponry after the fact.
azrael Posted Sunday at 11:34 PM
46 minutes ago, Scopedog said: [...] I find it very hard to believe that the US government isn't working on this exact type of subversive AI technology.
We're not saying they aren't. But squeezing AGI into a credit card-sized SoC is not something that will be achieved in our lifetime. The AI in our world today is nowhere near Hollywood AI. Very far from it. The computers currently in robotaxis are scaled-down server hardware: just enough to run the necessary applications and process the sensor data in real time without consuming a ridiculous amount of power. The rest gets offloaded to the cloud.
1 hour ago, Big s said: [...] Depending on the necessity, technology moves fairly quickly and world conflicts supported by super powers tend to move technology faster than most catalysts
Actually, it took a couple of decades. It moves fast, but not THAT fast. Economy of scale also plays into this. And unfortunately, we're going right back to room-sized computers (computer clusters, that is) because personal computers do not have the computing horsepower to process models at a level that makes it economical, nor do most homes have the electrical power to keep it running. AI model processing is a power-hungry task, and that's been a major hurdle keeping it in datacenters. LLMs that run on our PCs usually run on smaller data sets and require more time to process that data versus a cluster in a data center processing large datasets at record pace.
Seto Kaiba Posted Sunday at 11:42 PM
1 hour ago, Big s said: [...] Depending on the necessity, technology moves fairly quickly and world conflicts supported by super powers tend to move technology faster than most catalysts
Eh... while that's partially correct, the actual amount of time it took is quite a bit longer than just "a few years" and owes a lot to the switch from vacuum tubes to transistors to your modern integrated circuits. The pace of advancement has also slowed down quite a bit in recent years because we have effectively hit the limits of what we can reasonably do with silicon in terms of improving packing density and clock rate. (That's actually why high-end chips like Intel's 13th and 14th gen have been burning out. The push for ever-faster clock rates while nearing the limits of silicon's performance led to simply overclocking the chips until they started burning up.)
1 hour ago, Scopedog said: [...] I find it very hard to believe that the US government isn't working on this exact type of subversive AI technology.
Some of it is mentioned in passing in Macross Plus... the Macross Concern is also the party who provided the bio-neural chip to the Venus Sound Factory team working on Sharon, and the same group who developed the Ghost X-9 around the same AI technology. The project's goal was to produce a next-generation unmanned fighter that could operate more flexibly on the battlefield and exhibit humanlike levels of unpredictability in combat maneuvers.
That same research is still ongoing in Macross Frontier's drama CDs, with LAI working on a next-generation Ghost that complies with the post-Sharon Apple Incident regulations on AI but can nevertheless still exhibit humanlike responses thanks to personality-modeling AI. (Luca's questionable judgement led him to model the prototypes on his crush and his two best friends from school.) If the government were working on something like Sharon, we would know about it. That kind of development involves hundreds of thousands of people and billions if not trillions of dollars in investment... and the government is absolute rubbish at keeping secrets at the best of times. These are NOT the best of times when it comes to secrecy. 🤣 What we do know they're doing with AI is trying to make unmanned wingmen for manned 5th and 6th Generation fighters, like what we see with Luca's Ghosts in Macross Frontier and the Lilldrakens and Super Ghosts in Macross Delta. It's not going great, but it could be going a lot worse. They're kind of at the "well, at least it's not cartwheeling across the sky like a SpaceX rocket" phase. (I can only assume conspiracy theorists are kids who never had to do group projects in school... and therefore have a very exaggerated and beautifully optimistic belief in how well people work together in groups. 🤣)
Scopedog Posted Monday at 01:05 AM Author
1 hour ago, Seto Kaiba said: [...] If the government were working on something like Sharon, we would know about it. That kind of development involves hundreds of thousands of people and billions if not trillions of dollars in investment... and the government is absolute rubbish at keeping secrets at the best of times. [...]
I really appreciate your responses. What do you think about mind-controlled aircraft with advanced variable geometry (not swing-wing but actual shape-shifting materials, which the Chinese have supposedly accomplished)?
Big s Posted Monday at 02:35 AM
1 hour ago, Scopedog said: [...] What do you think about mind-controlled aircraft with advanced variable geometry (not swing-wing but actual shape-shifting materials, which the Chinese have supposedly accomplished)?
There are already a lot of experiments with controlling aircraft without using arm and leg movements. Mostly eyes and temple attachments, but brainwave control might not be too far off either, since they're also doing experiments with brain chips, oddly a lot in paralysis treatment. Controlling simple functions like flaps and such might actually be easier than the stuff trying to get the upper spine to control lower-leg nerves.
As far as variable-shape materials, they do exist, but I think it's a long way from doing something as complex as the wings on the YF-21. But I will say that things are further along than I thought they'd be a couple decades ago. I just really haven't seen many practical uses for these materials, due to issues with durability and trying to get the materials to do more complex movements. Most at the moment are memory-type materials that usually just flex from one shape back to the original form.
Seto Kaiba Posted Monday at 02:35 AM (edited)
1 hour ago, Scopedog said: [...] What do you think about mind-controlled aircraft with advanced variable geometry (not swing-wing but actual shape-shifting materials, which the Chinese have supposedly accomplished)?
Rudimentary flight control using dry electroencephalographic sensors is technically quite possible. There were a number of gimmicky children's toys based on the idea of using an EEG headset to control a simple motorized toy back in the 2000s. Star Wars even got in on it in 2009 with a toy called the Force Trainer, which used an EEG headset to control the PWM of a fan that would levitate a plastic ball. The fad didn't last very long, in part due to those very basic sensors not being capable of complex control, but it's proof of concept at the very least. As in Macross, it would be an absolutely terrible way to try to control an aircraft. The YF-21's brain direct interface had realistically unforgiving design tolerances when it came to keeping the sensors aligned with the pilot's head. Even a few millimeters of slip in that sensor hood was enough to greatly reduce the system's accuracy. As such, the YF-21 ended up needing a pilot seat that almost totally immobilized the pilot to prevent that sensor hood from shifting. (This is why, in real EEG testing, they stick the sensors directly to your scalp with gel, and even then encourage you not to move.) Supplemental technical publications also suggest that, realistically, the BDI system would need hundreds of hours of training and data collection to build up a translation database allowing it to convert the pilot's brainwave data into usable machine instructions. Even then, a sharp shock or strong emotion could result in a loss of control over the system by introducing noise into the recorded brainwaves, much like we see happen in the OVA.
The system was ultimately much too finicky and unreliable to be practical in combat and was scrapped.
Attitude control via wing warping is technology that goes all the way back to the earliest powered aircraft. The modern version of the concept is called the adaptive compliant wing. It's something the US was testing back in the mid-'80s. Testing with a modified F-111 revealed that the concept has durability issues and is rather more expensive than conventional control surfaces: flaws that are echoed in the Macross universe's YF-21. There is an EU-funded research group called FLEXOP which has been looking at ways to apply the technology to jet airliners as a way to save fuel through drag reduction.
Edited Monday at 02:37 AM by Seto Kaiba
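The Force Trainer-style control loop can be sketched in a few lines: a noisy "attention" reading gets smoothed, then mapped to a fan PWM duty cycle. The function names and numbers here are invented for illustration, but the sketch shows both the control idea and why a slipped sensor (much more noise on the same signal) wrecks it.

```python
import random

def smooth(readings, alpha=0.2):
    """Exponential moving average -- the simple noise filtering any
    EEG-driven toy needs before using the signal for control."""
    level = readings[0]
    out = []
    for r in readings:
        level = alpha * r + (1 - alpha) * level
        out.append(level)
    return out

def to_duty_cycle(level, lo=10.0, hi=90.0):
    """Map a 0-100 'attention' level onto a fan PWM duty cycle (%)."""
    level = max(0.0, min(100.0, level))
    return lo + (hi - lo) * level / 100.0

# Steady concentration: readings cluster around 70.
steady = [70 + random.gauss(0, 3) for _ in range(50)]
# Sensor slip: the same signal buried in heavy noise.
slipped = [70 + random.gauss(0, 40) for _ in range(50)]

print("steady fan duty: %.1f%%" % to_duty_cycle(smooth(steady)[-1]))
print("slipped fan duty: %.1f%%" % to_duty_cycle(smooth(slipped)[-1]))
```

One scalar channel is enough to levitate a ball; an aircraft needs many precise channels at once, which is where the sensor-alignment problem described above becomes crippling.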
Big s Posted Monday at 03:35 AM
12 minutes ago, Scopedog said: Wow, this is great feedback. I'm going a bit off-topic now, but what do you think about the Macross II bits system (or whatever they were called, I know it's an old idea in video games)? How close, or far, are we from having autonomous drones accompanying aircraft?
Maybe not too far, but it's one of those things where the practice isn't there. That may change, though. Normally, if we need the firepower of a manned aircraft, then we'd send a manned aircraft. With certain conflicts in other areas, it may be more practical to send an aircraft with other aircraft as escort, and as things change, it may end up with more complex dogfighting that might benefit from a launched drone to help out. But I still think that's not something we'd see with the types of conflicts we're seeing at the moment.
pengbuzz Posted Monday at 12:16 PM (edited)
16 hours ago, Seto Kaiba said: [...] A true autonomous vehicle would be SAE Level 5, which nobody has reached yet because it requires the car to be able to essentially function totally independently in any conditions and on any roads. A computer advanced enough to do this would be prohibitively large and heavy, and draw too much power to actually put on a car. [...]
Could such a computer be land-based, with telemetry sent to it and commands sent back via cell/radio? I know the issues with that (loss of signal/signal degradation, hacking vulnerability, data corruption, signal crossover, signal jamming/blocking, spots on your dishes in the dishwasher, etc.); I just thought that might be another approach...
Edited Monday at 12:24 PM by pengbuzz
Big s Posted Monday at 02:58 PM Posted Monday at 02:58 PM 1 hour ago, pengbuzz said: Could such a computer be land-based and have telemetry sent to it/ commands sent back via cell/ radio? I know the issues with that (loss of signal/ signal degradation, hacking vulnerability, data corruption, signal crossover, signal jamming. blocking, spots on your dishes in the dishwasher, etc.); I just thought that might be another approach... I’ve seen Waymo’s cars getting stuck just going around in parking lots, so they tend to get confused even with the onboard hardware. I’d imagine that having everything off the vehicle might even be worse, but I’m nowhere near being an expert in these. But it was a hilarious video on YouTube with the guy trying to get to an airport and the car kept driving him in circles through a random lot while he was in the back trying to call tech support for help. Quote
Seto Kaiba Posted Monday at 03:49 PM Posted Monday at 03:49 PM (edited) 3 hours ago, pengbuzz said: Could such a computer be land-based and have telemetry sent to it/ commands sent back via cell/ radio? I know the issues with that (loss of signal/ signal degradation, hacking vulnerability, data corruption, signal crossover, signal jamming. blocking, spots on your dishes in the dishwasher, etc.); I just thought that might be another approach... It's possible in theory, but you absolutely wouldn't want to attempt it in practice. Even in perfect conditions, the additional latency involved in offboard control would be a major safety risk. You typically have between 0.7 and 1.5 seconds to react in the event of a potential collision. Cellular data networks are not the fastest, and you'll typically see a ping of around 50-500ms depending on how good your signal is, how contested the local network is, and what network you're on. The car would not be able to send sensor data to the cloud and get a reaction back fast enough to avoid collisions in a lot of cases. Toss in issues with network disruptions due to weather, distance to the nearest tower, noise, jamming, etc. and it becomes a nightmare scenario. Local control is much faster and more reliable, with an end-to-end communication and reaction time faster than what humans are capable of in most circumstances. 51 minutes ago, Big s said: I’ve seen Waymo’s cars getting stuck just going around in parking lots, so they tend to get confused even with the onboard hardware. I’d imagine that having everything off the vehicle might even be worse, but I’m nowhere near being an expert in these. But it was a hilarious video on YouTube with the guy trying to get to an airport and the car kept driving him in circles through a random lot while he was in the back trying to call tech support for help. That's one of the problems with limited onboard computer hardware... 
even with advanced radar, optical cameras, and LIDAR, autonomous vehicles can end up stuck when having to deal with flesh and blood drivers that don't follow the rules of the road as rigidly as the machines do. It's particularly bad when there are unclear markings on the road, or road markings just aren't visible due to weather or wear and tear, since they depend on those to orient themselves. There's a prank that's sometimes pulled by drawing a do-not-cross double line in salt or spray paint around an autonomous vehicle, trapping it with its own refusal to disobey traffic laws... and this is some of the most advanced autonomous AI we have. We're a long, LONG way from something like Sharon Apple. Teslas are even more prone to such issues since they lack LIDAR arrays and try to get by purely with ultrasonics, radar, and optical cameras. They often completely miss signage, fail to identify obstacles, and run into stationary objects they failed to see when visibility's poor or their camera lenses get dirty. Edited Monday at 03:50 PM by Seto Kaiba Quote
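To put rough numbers on the cloud-control latency problem from the earlier post (a back-of-the-envelope sketch; the 0.7 s reaction budget and 500 ms worst-case ping come from the post, while the 300 ms allowance for sensor upload and cloud-side compute is my own assumed placeholder):

```python
# Back-of-the-envelope check of the cloud-control latency problem described
# above. Reaction budget and ping figures come from the post; the 300 ms
# allowance for sensor upload + server-side compute is an assumption.
def metres_travelled(speed_kmh: float, seconds: float) -> float:
    """Distance the car covers while waiting on the control loop."""
    return speed_kmh / 3.6 * seconds

reaction_budget = 0.700   # seconds, low end of the 0.7-1.5 s window
worst_case_ping = 0.500   # seconds, round trip on a congested cell network
upload_compute  = 0.300   # seconds, assumed sensor upload + cloud processing

loop_time = worst_case_ping + upload_compute
print(loop_time > reaction_budget)              # True: the budget is blown
print(round(metres_travelled(100, loop_time)))  # metres covered "blind" at 100 km/h
```

At highway speed the car travels over 20 metres before any cloud-issued command could even arrive, which is why the reaction has to be computed onboard.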
Seto Kaiba Posted Monday at 04:17 PM Posted Monday at 04:17 PM WRT the previous post and how it applies to drone aircraft: remotely operated or semiautonomous aircraft like the MQ-9 typically operate at altitudes and in areas where there are few to no collision risks. These more forgiving conditions allow drones to adopt default behaviors like autonomously circling over an area or returning to base in the event that the control signal is lost. They're typically low enough that the possibility of collision with another aircraft is minimal outside of intentional attempts to collide, but high enough that terrain, buildings, and foliage pose no risk either. Being able to maneuver in three dimensions to avoid any possible collisions also makes matters far more forgiving. Using dedicated base stations and military satellite networks helps with the network congestion and latency issues too. Quote
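The lost-link failsafe behavior described above (loiter first, then return to base) can be sketched as a tiny state machine. The state names and the 30-second timeout are illustrative assumptions, not any real drone's actual logic:

```python
# Sketch of the lost-link failsafe described above: if the control signal
# drops, loiter over the area; if it stays lost past a timeout, fly home.
# State names and the 30 s timeout are illustrative assumptions.
def failsafe_state(link_ok: bool, seconds_since_loss: float,
                   timeout_s: float = 30.0) -> str:
    if link_ok:
        return "REMOTE_CONTROL"
    # Link lost: circle in place first, giving the operator time to reconnect.
    return "LOITER" if seconds_since_loss < timeout_s else "RETURN_TO_BASE"

print(failsafe_state(True, 0))     # REMOTE_CONTROL
print(failsafe_state(False, 5))    # LOITER (circle over the area)
print(failsafe_state(False, 120))  # RETURN_TO_BASE
```

This kind of default is only safe because, as noted, the drone's operating altitude keeps the airspace around it empty; a car on a city street has no equivalent "safe holding pattern".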
pengbuzz Posted 21 hours ago Posted 21 hours ago (edited) 13 hours ago, Seto Kaiba said: It's possible in theory, but you absolutely wouldn't want to attempt it in practice. Even in perfect conditions, the additional latency involved in offboard control would be a major safety risk. You typically have between 0.7 and 1.5 seconds to react in the event of a potential collision. Cellular data networks are not the fastest, and you'll typically see a ping of around 50-500ms depending on how good your signal is, how contested the local network is, and what network you're on. The car would not be able to send sensor data to the cloud and get a reaction back fast enough to avoid collisions in a lot of cases. Toss in issues with network disruptions due to weather, distance to the nearest tower, noise, jamming, etc. and it becomes a nightmare scenario. Local control is much faster and more reliable, with an end-to-end communication and reaction time faster than what humans are capable of in most circumstances. Thanks for the clarification and additional info; I knew there had to be more to it than just what I was thinking, otherwise they'd have done it already. On 6/1/2025 at 12:17 AM, Seto Kaiba said: When people think about the term AI, what they're thinking about, more often than not, is what's called Artificial General Intelligence. A computer that can think and reason like a human being. That technology is purely science fiction for a bunch of reasons. Mainly hardware and software limitations on the computer's side. The low end estimate of exactly how much computational power is trapped in the average human's noggin is about 1 exaflop. That's 1 times 10 to the 18th power floating point operations per second. And that's done on about 20 watts of power. 
Exaflop-scale supercomputers became a thing for the first time in 2022, but they're about a square kilometer in size, made up of around 5,000 separate high-end processors joined by hundreds of kilometers of cabling and coolant piping, draw over 30 million watts of power to operate (nuclear power station level energy demands), and still have all the limits of a machine processing linear operations in binary. They're not capable of fuzzy logic, abstract reasoning, or any of the other insane stuff that your squishy human brain does on a minute-by-minute basis. This is the kind of thing that might become possible when we have quantum supercomputers... but we're still trying to figure out how to reliably store single qubits. The AI technology that the news is fussing over is massively oversold. LLMs like Google's Gemini, Apple's Apple Intelligence, OpenAI's ChatGPT, xAI's Grok, etc. are nothing more than extremely inefficient upscalings of the same kind of text autocomplete in your phone's onscreen keyboard app. They possess no reasoning capability. All they're capable of doing is probability-based pattern-matching. Instead of just guessing the next word you might type based on probabilities from sample text, they're taking keywords and stringing together vast strings of text based purely on the probability of those words appearing in that order, derived from the gargantuan amount of raw text they've been fed from books and websites and so on. Which is why they "hallucinate". They have no capacity to actually understand the material you're exchanging with them. It's almost an "infinite monkeys" situation, with an extremely powerful server essentially guessing wildly based purely on next word probability until it comes up with a plausible-sounding string of words that it vomits up. The art-based ones are no different. They break sample data down into mathematical models and then string those models together based on keywords from your prompt. 
Because their function is purely probability analysis-based, they can be "poisoned" with junk data that messes up those probability tables and makes them draw or talk even more nonsense than they normally do. Those "AI" pop stars are, variously, just people in mo-cap suits with autotune steering rigged 3D models like a Vtuber or a combination of existing text, speech synthesis, and video synthesis AI software that's just running preprogrammed and vetted prompts to avoid the system spazzing out. Will there be "AI"-powered drone weapons in the near future? Absolutely. Not weapons that can think for themselves, but weapons that use image recognition software to identify people or military vehicles connected to basic fire control systems. Something broadly analogous to the QF-2200 Ghost from Macross Zero, essentially. Something like the Ghost X-9, Sharon Apple, the Siren Delta System, Skynet, Commander Data, etc. is a sci-fi pipe dream even with the foreseeable future's technology. So basically, what the companies were showing us with "self driving cars" is mostly just smoke and mirrors to impress the public into thinking they're more advanced than they are, right? I would also assume it's to garner more investors and money from the government for research and development, but you know more than I do since you work in the field. Edited 21 hours ago by pengbuzz Quote
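The "next-word probability" machinery described in the quoted post can be illustrated with a toy bigram model. This is a drastically simplified sketch of the principle (count which word follows which, then pick the most likely), nothing like a real LLM's internals:

```python
# Toy illustration of the "next-word probability" idea described above.
# Real LLMs use huge neural networks trained on vast corpora; this bigram
# counter is a drastically simplified sketch of the same principle.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the sample text.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Pick the statistically most likely next word; no understanding involved."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "?"

print(predict_next("the"))  # "cat": seen twice, vs once each for "mat"/"fish"
```

The model will happily emit fluent-looking chains of words it has "seen" often, and it will also confidently emit nonsense when the counts mislead it, which is the toy-scale version of a hallucination.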
azrael Posted 20 hours ago Posted 20 hours ago 53 minutes ago, pengbuzz said: So basically, what the companies were showing us with "self driving cars" is mostly just smoke and mirrors to impress the public into thinking they're more advanced than they are, right? I would also assume it's to garner more investors and money from the government for research and development, but you know more than I do since you work in the field. More like taking advantage of people's lack of understanding. There isn't a terrible amount of "learning" or "intelligence" in self-driving cars. You drive? Driving is based on the assumption that you follow the rules of the road, speed limits, right of way, etc. Those are all rules and you are an agent acting in that system of rules, assuming everyone else also follows those rules. When you change lanes, you follow a set of actions based on those rules. Is the lane you are changing into directly clear? If yes, you adjust the wheels so that you start moving into the next lane until you reach the desired lane. If the lane next to you is not clear, you either speed up or slow down until the space next to you is clear and then you move into the next lane. This is essentially a decision tree. All of the sensors (cameras, LiDAR, radar, etc.) in self-driving cars are constantly measuring the car's own speed and that of everyone around it, tracking distance to and from cars around you, and tracking foreign objects on the road (like people in the crosswalk, etc.), all feeding into the car's onboard computer. Guess what your eyes, ears and brain are doing? Same thing. Your eyes see depth and can track objects. You are constantly scanning the road in front of and around you. You use mirrors to track objects around you. All of this information is relayed to your brain, i.e. the computer, which calculates the relative distance to objects around you. You look at the speedometer to get your speed. 
If the cars around you stay at the same distance from you according to your speed, that means they are going at the same speed. Using all this information, you make decisions on how you drive according to what's happening around you. Self-driving cars are doing the exact same thing. Some do it relatively well. Some do it very poorly. Quote
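The lane-change rules described above really are just a decision tree, and a minimal sketch makes that concrete. The sensor picture is boiled down to two booleans here; a real system fuses camera, LiDAR, and radar tracks into these judgments:

```python
# Minimal sketch of the lane-change decision tree described above.
# Sensor fusion is reduced to two booleans for illustration; a real
# stack derives these from camera/LiDAR/radar object tracks.
def lane_change_action(target_lane_clear: bool, gap_opening_ahead: bool) -> str:
    if target_lane_clear:
        # Rule: the target lane is directly clear, so steer over.
        return "steer into target lane"
    # Rule: lane occupied, so adjust speed until a gap opens.
    return "speed up" if gap_opening_ahead else "slow down"

print(lane_change_action(True, False))   # steer into target lane
print(lane_change_action(False, True))   # speed up
print(lane_change_action(False, False))  # slow down
```

The hard part is not this tree; it's reliably producing those two booleans from noisy sensor data, which is exactly where the systems "do it relatively well" or "very poorly".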
pengbuzz Posted 19 hours ago Posted 19 hours ago (edited) 1 hour ago, azrael said: More like taking advantage of people's lack of understanding. There isn't a terrible amount of "learning" or "intelligence" in self-driving cars. You drive? Driving is based on the assumption that you follow the rules of the road, speed limits, right of way, etc. Those are all rules and you are an agent acting in that system of rules, assuming everyone else also follows those rules. When you change lanes, you follow a set of actions based on those rules. Is the lane you are changing into directly clear? If yes, you adjust the wheels so that you start moving into the next lane until you reach the desired lane. If the lane next to you is not clear, you either speed up or slow down until the space next to you is clear and then you move into the next lane. This is essentially a decision tree. All of the sensors (cameras, LiDAR, radar, etc.) in self-driving cars are constantly measuring the car's own speed and that of everyone around it, tracking distance to and from cars around you, and tracking foreign objects on the road (like people in the crosswalk, etc.), all feeding into the car's onboard computer. Guess what your eyes, ears and brain are doing? Same thing. Your eyes see depth and can track objects. You are constantly scanning the road in front of and around you. You use mirrors to track objects around you. All of this information is relayed to your brain, i.e. the computer, which calculates the relative distance to objects around you. You look at the speedometer to get your speed. If the cars around you stay at the same distance from you according to your speed, that means they are going at the same speed. Using all this information, you make decisions on how you drive according to what's happening around you. Self-driving cars are doing the exact same thing. Some do it relatively well. Some do it very poorly. 
Actually, I don't drive at all; I suffered a traumatic brain injury (TBI) in 2007 that affects my visual perception, motor reflexes, judgment and coordination so badly that I can never sit behind the wheel of a car again. Add to that seizures (tonic/clonic, aka "grand mal") and emotional instability due to damage to the amygdala (the center that regulates fear, anger and anxiety), and I'm the last person in the world you would ever want driving. It sounds like these "self-driving cars" would drive worse than I would now! O.O Edited 19 hours ago by pengbuzz Quote
Seto Kaiba Posted 10 hours ago Posted 10 hours ago (edited) 11 hours ago, pengbuzz said: Thanks for the clarification and additional info; I knew there had to be more to it than just what I was thinking, otherwise they'd have done it already. There are some features on some cars (mainly EVs) where it is possible/practical or even advantageous to move computation out into the cloud... but those are mostly things that go on while the vehicle is stationary and thus are not critically time-sensitive, like rate-conscious "smart grid" charging of EVs. (A project I worked on with the Argonne National Lab back in the day.) 11 hours ago, pengbuzz said: So basically, what the companies were showing us with "self driving cars" is mostly just smoke and mirrors to impress the public into thinking they're more advanced than they are, right? I would also assume it's to garner more investors and money from the government for research and development, but you know more than I do since you work in the field. To be fair, most OEMs have been reasonably upfront about the actual capabilities of their ADAS and related autonomous vehicle features. The commercially available autonomy features (mainly ADAS ones like adaptive cruise control and smart lane stay) work well and are exhaustively tested for safety. They're just not self-driving. Likewise, robotaxi companies like Waymo (who I've worked with directly), Baidu, and Pony.ai are quite open about the fact that their robotaxi services are essentially an open beta of an evolving technology that's "good enough" to be safely tested by the public under controlled conditions but isn't ready for widespread consumer adoption yet. The sort of "AI" technology we'd need to create a car that can drive anywhere as flexibly and safely as a human driver doesn't exist yet. 
It's a hugely complex undertaking to create a system that can respond to road conditions and hazards as flexibly as a living driver and be able to drive anywhere the way a living driver could. There are still a lot of unresolved (and possibly unsolvable) questions that need to be figured out for us to create a self-driving program that can autonomously operate on any public road, never mind one that can operate anywhere, as a true Level 5 autonomous vehicle must... never mind doing so on hardware compact and efficient enough to fit inside of a car and cheap enough to actually include on a mass production basis. People don't appreciate just how much of the heavy lifting in driving a car is done by the brain meat of the human in the front seat. To really pull it off a computer needs to not only be able to follow the rules of the road and make snap decisions about safety to prevent collisions and such, it also needs to be able to understand what terrain is safe to drive on, to identify unsafe conditions on the road and respond accordingly, and make snap decisions based on emerging safety risks as well. That level of tech just ain't here yet. A lot of misconceptions about the completeness and road-readiness of "self-driving" cars come from one bad actor: Tesla Motors. Mainly their dippy CEO, who keeps pushing the claim that Tesla will have full Level 5 autonomy figured out "in the next few years" as a way to sell idiots on an overpriced self-driving "beta" (actually just their Level 2 system) with the vague promise of a future update to unlock true self-driving capability coming "when it's done". Like most of the company's claims, it's hogwash and allegedly (based on court filings) done with intent to defraud. Edited 10 hours ago by Seto Kaiba Quote
pengbuzz Posted 9 hours ago Posted 9 hours ago 38 minutes ago, Seto Kaiba said: There are some features on some cars (mainly EVs) where it is possible/practical or even advantageous to move computation out into the cloud... but those are mostly things that go on while the vehicle is stationary and thus are not critically time-sensitive, like rate-conscious "smart grid" charging of EVs. (A project I worked on with the Argonne National Lab back in the day.) To be fair, most OEMs have been reasonably upfront about the actual capabilities of their ADAS and related autonomous vehicle features. The commercially available autonomy features (mainly ADAS ones like adaptive cruise control and smart lane stay) work well and are exhaustively tested for safety. They're just not self-driving. Likewise, robotaxi companies like Waymo (who I've worked with directly), Baidu, and Pony.ai are quite open about the fact that their robotaxi services are essentially an open beta of an evolving technology that's "good enough" to be safely tested by the public under controlled conditions but isn't ready for widespread consumer adoption yet. The sort of "AI" technology we'd need to create a car that can drive anywhere as flexibly and safely as a human driver doesn't exist yet. It's a hugely complex undertaking to create a system that can respond to road conditions and hazards as flexibly as a living driver and be able to drive anywhere the way a living driver could. There are still a lot of unresolved (and possibly unsolvable) questions that need to be figured out for us to create a self-driving program that can autonomously operate on any public road, never mind one that can operate anywhere, as a true Level 5 autonomous vehicle must... never mind doing so on hardware compact and efficient enough to fit inside of a car and cheap enough to actually include on a mass production basis. 
People don't appreciate just how much of the heavy lifting in driving a car is done by the brain meat of the human in the front seat. To really pull it off a computer needs to not only be able to follow the rules of the road and make snap decisions about safety to prevent collisions and such, it also needs to be able to understand what terrain is safe to drive on, to identify unsafe conditions on the road and respond accordingly, and make snap decisions based on emerging safety risks as well. That level of tech just ain't here yet. A lot of misconceptions about the completeness and road-readiness of "self-driving" cars come from one bad actor: Tesla Motors. Mainly their dippy CEO, who keeps pushing the claim that Tesla will have full Level 5 autonomy figured out "in the next few years" as a way to sell idiots on an overpriced self-driving "beta" (actually just their Level 2 system) with the vague promise of a future update to unlock true self-driving capability coming "when it's done". Like most of the company's claims, it's hogwash and allegedly (based on court filings) done with intent to defraud. Okay, my apologies then. I thought the plethora of them were doing this and not just Tesla. Apparently, given the info you gave me just now, we can tell where the "flow of the BS" is coming from. TBH: it's not really a surprise given that he 3D-prints rocket engine parts (I don't trust the sintering process for the metals used in them). Thanks again for the clarification; apparently, my info is half-baked at points (and needs some cinnamon and brown sugar plus icing!). Quote
Master Dex Posted 9 hours ago Posted 9 hours ago 3 minutes ago, pengbuzz said: Okay, my apologies then. I thought the plethora of them were doing this and not just Tesla. Apparently, given the info you gave me just now, we can tell where the "flow of the BS" is coming from. TBH: it's not really a surprise given that he 3D-prints rocket engine parts (I don't trust the sintering process for the metals used in them). Relativity Space does more with 3D-printed rocket components than any other space launch company, though even they are walking it back on new designs for cost reasons. SpaceX has done it, but not at a large scale. And in all truth, SpaceX isn't even directly operated at that guy's direction; it's run by Gwynne Shotwell, who does a very thankless job considering all the things they accomplish (and the press they have to deal with for iterative design testing being inherently destructive, with the layman just seeing it as failure). I'm all for dunking on Musk, but he takes too much credit (good and bad) for a lot of other people's work. He is more directly responsible at Tesla though, which causes most of the aforementioned BS trending. Quote
Seto Kaiba Posted 8 hours ago Posted 8 hours ago 56 minutes ago, pengbuzz said: Okay, my apologies then. I thought the plethora of them were doing this and not just Tesla. Apparently, given the info you gave me just now, we can tell where the "flow of the BS" is coming from. Misinformation and exaggeration play a big role in public perception of AI as being a lot more capable, powerful, efficient, and scalable than it actually is. It makes for a fantastic example of the true, rather embarrassingly primitive, state of AI technology though. Not only is an Artificial General Intelligence like Sharon Apple, Skynet, Commander Data, or any other sci-fi AI not "just a few years away" like the toxic techbros like to claim, the AI technology we actually have amounts to a coked-up and hilariously inefficient techno-parrot blindly repeating strings of words it's heard frequently with no understanding of what they mean, robot cars that can barely manage to drive certain ringfenced city streets in carefully controlled conditions, and slightly better autotune. Of course, it probably also does not help that we as a society keep moving the bar on what constitutes "AI". A lot of convenience features that use machine learning and so on, like autocorrect, auto-focus, automatic red eye correction, etc., were once considered AI, but aren't really thought of as such because they've become mundane and people have realized they're not actually "intelligent". Quote
Big s Posted 7 hours ago Posted 7 hours ago (edited) 1 hour ago, Master Dex said: Relativity Space does more with 3D-printed rocket components than any other space launch company, though even they are walking it back on new designs for cost reasons. SpaceX has done it but not to a large scale. And in all truth SpaceX isn't even directly operated to that guy's direction, it's run by Gwynne Shotwell who does a very thankless job considering all the things they accomplish (and the press they have to deal with for iterative design testing being inherently destructive but the layman just seeing it as failure). I'm all for dunking on Musk but he takes too much credit (good and bad) for a lot of other people's work. He is more directly responsible at Tesla though which causes most of the aforementioned BS trending. One thing I gotta give credit to SpaceX is that now if I break something or blow something by mistake, I can use their ultimate oops description "rapid unscheduled disassembly". It's their ultimate accomplishment to humanity. Dropped a fancy plate on the ground, I didn't break it. It was a "rapid unscheduled disassembly" I didn't accidentally break the window. It was a "rapid unscheduled disassembly". I didn't total the car. It was a "rapid unscheduled disassembly". I didn't accidentally kill the neighbor in a drunken fight and dismember the body to make it easier to dispose of the evidence. He had a "rapid unscheduled disassembly". That line works for every situation, kinda like a new version of the Mentos commercials Edited 7 hours ago by Big s Quote
Seto Kaiba Posted 7 hours ago Posted 7 hours ago 9 minutes ago, Big s said: “rapid unscheduled disassembly" That's actually borrowed from NASA. They have a list of lovely euphemisms for when things go horribly wrong on rockets. I think my favorite is either "engine-rich exhaust" (part of the engine melted and/or was ejected from the craft unintentionally) or "lithobraking maneuver" (it hit the ground). Quote
Master Dex Posted 4 hours ago Posted 4 hours ago Yeah that's all decades old spaceflight jargon and dark humor that came from old pilots and the steely-eyed missile men at Mission Control. SpaceX is just following up their legacy. Great work but it wouldn't be without what the vintage NASA did first. Quote
azrael Posted 4 hours ago Posted 4 hours ago 3 hours ago, pengbuzz said: Okay, my apologies then. I thought the plethora of them were doing this and not just Tesla. Apparently, given the info you gave me just now, we can tell where the "flow of the BS" is coming from. Unfortunately, "AI" has become a buzzword. Just like "blockchain", "the cloud", and so many more words before it. There's a commercial out there about data analytics software where they threw out "We use AI to blah blah your data blah blah...". No, you're just using ChatGPT to perform data analysis instead of paying 75-90k for a flesh body running Tableau. Sure, we may see AGI, in some stupidly basic form, in our lifetime. But it will need every node in an 800,000+ sq ft data center, powered by a small nuclear reactor, to accomplish that. Would make for a good headline, but it's completely impractical. Quote
Big s Posted 1 hour ago Posted 1 hour ago 3 hours ago, azrael said: Sure, we may see AGI, in some stupidly basic form, in our lifetime. But it will need every node in an 800,000+ sq ft data center, powered by a small nuclear reactor, to accomplish that. Would make for a good headline, but it's completely impractical. Either that or someone will do some unethical thing by cloning a brain and hooking it up to an SUV Quote