
Posted (edited)

The basic story of Macross Plus, when it was released 30 years ago, is that an artificial intelligence pop idol falls in love with a man and takes over an extremely dangerous military drone, which is then defeated by a guy piloting a mind-controlled aircraft with highly advanced variable geometry.  It sounds ridiculous but I feel we're not that far away from this scenario in real life.  Artificial intelligence is advancing daily and AI pop stars already exist.  China, and probably the USA, are working on variable geometry.

Edited by Scopedog
Posted
3 hours ago, Scopedog said:

It sounds ridiculous but I feel we're not that far away from this scenario in real life.  Artificial intelligence is advancing daily and AI pop stars already exist.  China, and probably the USA, are working on variable geometry.

We are many decades, if not centuries, from having to worry about something like that.

When people think about the term AI, what they're thinking of, more often than not, is what's called Artificial General Intelligence: a computer that can think and reason like a human being.  That technology is purely science fiction for a bunch of reasons, mainly hardware and software limitations on the computer's side.  The low-end estimate of exactly how much computational power is trapped in the average human's noggin is about 1 exaflop.  That's 1 times 10 to the 18th power floating-point operations per second.  And that's done on about 20 watts of power.  Exaflop-scale supercomputers became a thing for the first time in 2022, but they fill an entire data center hall, are made up of tens of thousands of high-end CPUs and GPUs joined by well over a hundred kilometers of cabling and coolant piping, draw around 30 million watts of power to operate (enough electricity to run a small town), and still have all the limits of a machine processing linear operations in binary.  They're not capable of fuzzy logic, abstract reasoning, or any of the other insane stuff that your squishy human brain does on a minute-by-minute basis.  This is the kind of thing that might become possible when we have quantum supercomputers... but we're still trying to figure out how to reliably store single qubits.
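To put that efficiency gap in rough numbers, here's a back-of-the-envelope sketch using only the estimates above (every figure is an order-of-magnitude guess, not a measurement):

```python
# Back-of-the-envelope efficiency comparison using the rough figures above.
# Both "machines" land at roughly the same raw FLOPS; the gap is in power.
brain_flops = 1e18      # ~1 exaflop, low-end estimate for the human brain
brain_watts = 20        # rough metabolic power budget of the brain

super_flops = 1e18      # first exascale supercomputers, circa 2022
super_watts = 30e6      # ~30 MW facility-scale power draw

brain_eff = brain_flops / brain_watts    # ~5e16 FLOPS per watt
super_eff = super_flops / super_watts    # ~3.3e10 FLOPS per watt

print(f"Brain:         {brain_eff:.1e} FLOPS/W")
print(f"Supercomputer: {super_eff:.1e} FLOPS/W")
print(f"The brain comes out roughly {brain_eff / super_eff:,.0f}x more power-efficient")
```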

The AI technology that the news is fussing over is massively oversold.  LLMs like Google's Gemini, Apple's Apple Intelligence, OpenAI's ChatGPT, xAI's Grok, etc. are nothing more than extremely inefficient upscalings of the same kind of text autocomplete in your phone's onscreen keyboard app.  They possess no reasoning capability.  All they're capable of doing is probability-based pattern-matching.  Instead of just guessing the next word you might type from probabilities in sample text, they take keywords and string together vast runs of text based purely on the probability of those words appearing in that order, learned from the gargantuan amount of raw text they've been fed from books, websites, and so on.  Which is why they "hallucinate".  They have no capacity to actually understand the material you're exchanging with them.  It's almost an "infinite monkeys" situation, with an extremely powerful server essentially guessing wildly based purely on next-word probability until it comes up with a plausible-sounding string of words that it vomits up.  The art-based ones are no different.  They break sample data down into mathematical models and then string those models together based on keywords from your prompt.  Because their function is purely probability-analysis-based, they can be "poisoned" with junk data that messes up those probability tables and makes them draw or talk even more nonsense than they normally do.
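As a toy illustration of that "next-word probability" idea, here's a bigram autocomplete sketch; real LLMs are neural networks trained on incomparably more data, but the underlying principle of probability-weighted continuation is the same:

```python
import random
from collections import defaultdict, Counter

# Toy bigram "autocomplete": count which word follows which in sample text,
# then generate by repeatedly sampling a likely next word.
sample = "the cat sat on the mat and the cat ate the fish on the mat".split()

follows = defaultdict(Counter)
for current, nxt in zip(sample, sample[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:            # dead end: no known continuation
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))   # e.g. "the cat sat on the mat and the cat"
```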

Those "AI" pop stars are, variously, just people in mo-cap suits with autotune steering rigged 3D models like a Vtuber or a combination of existing text, speech synthesis, and video synthesis AI software that's just running preprogrammed and vetted prompts to avoid the system spazzing out.

Will there be "AI"-powered drone weapons in the near future?  Absolutely.  Not weapons that can think for themselves, but weapons that use image recognition software to identify people or military vehicles connected to basic fire control systems.  Something broadly analogous to the QF-2200 Ghost from Macross Zero, essentially.  Something like the Ghost X-9, Sharon Apple, the Siren Delta System, Skynet, Commander Data, etc. is a sci-fi pipe dream with even the foreseeable future's technology.

Posted
47 minutes ago, Seto Kaiba said:

We are many decades, if not centuries, from having to worry about something like that.

When people think about the term AI, what they're thinking of, more often than not, is what's called Artificial General Intelligence: a computer that can think and reason like a human being.  That technology is purely science fiction for a bunch of reasons, mainly hardware and software limitations on the computer's side.  The low-end estimate of exactly how much computational power is trapped in the average human's noggin is about 1 exaflop.  That's 1 times 10 to the 18th power floating-point operations per second.  And that's done on about 20 watts of power.  Exaflop-scale supercomputers became a thing for the first time in 2022, but they fill an entire data center hall, are made up of tens of thousands of high-end CPUs and GPUs joined by well over a hundred kilometers of cabling and coolant piping, draw around 30 million watts of power to operate (enough electricity to run a small town), and still have all the limits of a machine processing linear operations in binary.  They're not capable of fuzzy logic, abstract reasoning, or any of the other insane stuff that your squishy human brain does on a minute-by-minute basis.  This is the kind of thing that might become possible when we have quantum supercomputers... but we're still trying to figure out how to reliably store single qubits.

The AI technology that the news is fussing over is massively oversold.  LLMs like Google's Gemini, Apple's Apple Intelligence, OpenAI's ChatGPT, xAI's Grok, etc. are nothing more than extremely inefficient upscalings of the same kind of text autocomplete in your phone's onscreen keyboard app.  They possess no reasoning capability.  All they're capable of doing is probability-based pattern-matching.  Instead of just guessing the next word you might type from probabilities in sample text, they take keywords and string together vast runs of text based purely on the probability of those words appearing in that order, learned from the gargantuan amount of raw text they've been fed from books, websites, and so on.  Which is why they "hallucinate".  They have no capacity to actually understand the material you're exchanging with them.  It's almost an "infinite monkeys" situation, with an extremely powerful server essentially guessing wildly based purely on next-word probability until it comes up with a plausible-sounding string of words that it vomits up.  The art-based ones are no different.  They break sample data down into mathematical models and then string those models together based on keywords from your prompt.  Because their function is purely probability-analysis-based, they can be "poisoned" with junk data that messes up those probability tables and makes them draw or talk even more nonsense than they normally do.

Those "AI" pop stars are, variously, just people in mo-cap suits with autotune steering rigged 3D models like a Vtuber or a combination of existing text, speech synthesis, and video synthesis AI software that's just running preprogrammed and vetted prompts to avoid the system spazzing out.

Will there be "AI"-powered drone weapons in the near future?  Absolutely.  Not weapons that can think for themselves, but weapons that use image recognition software to identify people or military vehicles connected to basic fire control systems.  Something broadly analogous to the QF-2200 Ghost from Macross Zero, essentially.  Something like the Ghost X-9, Sharon Apple, the Siren Delta System, Skynet, Commander Data, etc. is a sci-fi pipe dream with even the foreseeable future's technology.

I disagree:

Spoiler

Some folks out there don't even have 2 watts running their brains.... :p

 

Posted
6 hours ago, Scopedog said:

The basic story of Macross Plus, when it was released 30 years ago, is that an artificial intelligence pop idol falls in love with a man and takes over an extremely dangerous military drone, which is then defeated by a guy piloting a mind-controlled aircraft with highly advanced variable geometry.  It sounds ridiculous but I feel we're not that far away from this scenario in real life.  Artificial intelligence is advancing daily and AI pop stars already exist.  China, and probably the USA, are working on variable geometry.

I don’t think it’s too far out. Some dude tried to marry a Hatsune Miku, and it really didn’t even have an AI. AI as it stands is a long way from something that could run a military op, but we do have self-driving vehicles and a lot of the tech is coming along fairly quickly. Might only be a few decades before an AI is flying a drone and able to pick targets on its own, maybe less. We already have preprogrammed drones doing wild things at concerts. Might not be too far out for combat with all the conflicts popping up.

Posted

An AI-controlled drone going "nuts" and needing to be shot down?  If someone is silly enough to hook up an autonomous drone with weaponry without an off switch, then we could possibly have that today.  An actual AI, nowhere near it.

Posted
12 hours ago, Big s said:

[...] but we do have self-driving vehicles and a lot of the tech is coming along fairly quickly.

Eh... as someone who works on vehicular autonomy systems professionally, we emphatically do not have self-driving vehicles yet.

It's actually a long way off, in terms of technological capability.  Much as with every other "AI" technology, the main stumbling block is processing power.  You need a very powerful computer to manage all the sensor and vehicle inputs needed to safely drive even at low speeds on city streets.  So much so that the few experimental cars certified with SAE Lv4 (limited-domain) autonomy have computers so large they have to be mounted on the top of larger cars like minivans or SUVs.  Those systems are only really capable of navigating a limited area on well-mapped city streets in good weather, and still require human intervention when they encounter an unsafe situation.  Those computers draw so much power in normal operation that the cars have to be fitted with auxiliary power systems just for the computer, and suffer reduced range from the extra weight and electrical demand.  A true autonomous vehicle would be SAE Level 5, which nobody has reached yet because it requires the car to be able to function essentially independently in any conditions and on any roads.  A computer advanced enough to do this would be prohibitively large and heavy, and draw too much power to actually put on a car.

About the best you can get in a commercially-available car is SAE Level 2 or Level 2+ autonomy, which is "Advanced Driver Assistance": features like lane keeping or adaptive cruise control.  The car is not actually capable of driving itself.  Tesla's "Full Self-Driving" is actually a Level 2 system that is falsely advertised as autonomous, which is why Tesla has been sued many times for false advertising, and for wrongful death by the families of customers who believed those claims and died when their "autonomous" cars crashed into stationary objects or other cars.  (Their impressive demonstrations of autonomous capability were found to actually be staged with cars driven by remote control.)
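For reference, here's the SAE J3016 ladder we keep referring to, with the descriptions paraphrased:

```python
# Quick reference for the SAE J3016 driving-automation levels discussed above
# (descriptions paraphrased, not the official wording).
SAE_LEVELS = {
    0: "No automation - warnings and momentary assistance only",
    1: "Driver assistance - steering OR speed support (e.g. basic cruise control)",
    2: "Partial automation - steering AND speed support, driver must supervise at all times",
    3: "Conditional automation - car drives itself in limited cases, driver must take over on request",
    4: "High automation - no driver needed, but only within a restricted domain (geofenced, good weather)",
    5: "Full automation - drives anywhere, in any conditions (not yet achieved by anyone)",
}

for level, description in sorted(SAE_LEVELS.items()):
    print(f"Level {level}: {description}")
```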

Posted (edited)

It occurs to me that the anime doesn't make it quite clear whether Sharon Apple is operating the Ghost herself or just designating targets while it operates autonomously.  I think she was probably operating it herself.  She's shown to be able to multitask, like being a different version of herself when she seduces Yang at the end while controlling the SDF-1 and bantering with Myung all at the same time.  If she were just designating targets, the target would have been the YF-19, so the Ghost would have had no reason to face off with the YF-21 (after 30 years, the YF-21 is still so ******** cool).

Edited by Scopedog
Posted (edited)
56 minutes ago, Scopedog said:

It occurs to me that the anime doesn't make it quite clear whether Sharon Apple is operating the Ghost herself or just designating targets while it operates autonomously.  I think she was probably operating it herself.  She's shown to be able to multitask, like being a different version of herself when she seduces Yang at the end while controlling the SDF-1 and bantering with Myung all at the same time.  If she were just designating targets, the target would have been the YF-19, so the Ghost would have had no reason to face off with the YF-21 (after 30 years, the YF-21 is still so ******** cool).

She's probably operating the Ghost herself.

According to her Macross Chronicle Character Sheet, the Sharon-type AI was developed for the military by the Macross Concern's Palo Alto II Research Institute.  It was designed to be a fleet supervisory support AI for use in emigrant fleets.  Its job was twofold: to assist with managing stress among the populations of early emigrant ships (which were on the spartan side in terms of living conditions) with entertainment and subliminal audiovisual hypnosis where necessary, and to take over control of the fleet on its own should its human commanders be incapacitated during an emergency.  The career of the virtuoid idol singer "Sharon Apple" was essentially a covert test of the incomplete Sharon-type AI's entertainment and population management systems disguised as a music company's avant garde tech demo.

When Sharon Apple went crazy rampage nuts as a result of being rushed to completion with an illegal and dangerous bio-neural processor and having her emotion data sampled from a woman with more baggage than Delta Airlines, she used the command and control functions she was designed with to seize control of the Macross, the Ghost X-9, and all networked defenses on Earth.

She wasn't able to break into the YF-21's systems the way she broke into the YF-19's because, as noted earlier in the OVA, half of the YF-21's computer is the pilot's brain.

Edited by Seto Kaiba
Posted
2 hours ago, Seto Kaiba said:

You need a very powerful computer to manage all the sensor and vehicle inputs needed to safely drive even at low speeds on city streets.  So much so that the few experimental cars certified with SAE Lv4 limited/partial autonomy have computers so large they have to be mounted on the top of larger cars like minivans or SUVs.

It only took a few years for the room-sized computer to be outshined by the personal computer, and for that to be outshined by the smartphone. Depending on the necessity, technology moves fairly quickly, and world conflicts supported by superpowers tend to move technology faster than most catalysts.

Posted
1 hour ago, Seto Kaiba said:

She's probably operating the Ghost herself.

According to her Macross Chronicle Character Sheet, the Sharon-type AI was developed for the military by the Macross Concern's Palo Alto II Research Institute.  It was designed to be a fleet supervisory support AI for use in emigrant fleets.  Its job was twofold: to assist with managing stress among the populations of early emigrant ships (which were on the spartan side in terms of living conditions) with entertainment and subliminal audiovisual hypnosis where necessary, and to take over control of the fleet on its own should its human commanders be incapacitated during an emergency.  The career of the virtuoid idol singer "Sharon Apple" was essentially a covert test of the incomplete Sharon-type AI's entertainment and population management systems disguised as a music company's avant garde tech demo.

I didn't know about this background info.  I'm not generally conspiracy-minded, but I find it very hard to believe that the US government isn't working on this exact type of subversive AI technology.

Posted
1 hour ago, Scopedog said:

I didn't know about this background info.  I'm not generally conspiracy-minded, but I find it very hard to believe that the US government isn't working on this exact type of subversive AI technology.

You should worry more about some corporation doing so.  A government will just come in and  hook it up to weaponry after the fact.

Posted
46 minutes ago, Scopedog said:

I didn't know about this background info.  I'm not generally conspiracy-minded, but I find it very hard to believe that the US government isn't working on this exact type of subversive AI technology.

We're not saying they aren't. But AGI scaled down into the package of a credit-card-sized SoC will not be achieved in our lifetime. The AI in our world today is nowhere near Hollywood AI. Very far from it. The computers currently in robotaxis are scaled-down server hardware; just enough to run the necessary applications and process the sensor data in real time without consuming a ridiculous amount of power. The rest gets offloaded to the cloud.

1 hour ago, Big s said:

It only took a few years for the room-sized computer to be outshined by the personal computer, and for that to be outshined by the smartphone. Depending on the necessity, technology moves fairly quickly, and world conflicts supported by superpowers tend to move technology faster than most catalysts.

Actually, it took a couple of decades. It moves fast, but not THAT fast. Economy of scale also plays into this. And unfortunately, we're going right back to room-sized computers (computer clusters, that is) because personal computers do not have the computing horsepower to process models at a level that makes it economical, nor do most homes have the electrical capacity to keep it running. AI model processing is a power-hungry task, and that's been a major hurdle keeping it in data centers. LLMs that run on our PCs usually run on smaller data sets and require more time to process that data, versus a cluster in a data center processing large datasets at record pace.

Posted
1 hour ago, Big s said:

It only took a few years for the room-sized computer to be outshined by the personal computer, and for that to be outshined by the smartphone. Depending on the necessity, technology moves fairly quickly, and world conflicts supported by superpowers tend to move technology faster than most catalysts.

Eh... while that is partially correct, the actual amount of time it took is quite a bit longer than just "a few years" and owes a lot to the switch from vacuum tubes to transistors to modern integrated circuits.  The pace of advancement has also slowed down quite a bit in recent years because we have effectively hit the limits of what we can reasonably do with silicon in terms of improving packing density and clock rate.  (That's actually why high-end chips like Intel's 13th and 14th gen have been burning out.  The push for ever-faster clock rates while nearing the limits of silicon's performance led to simply overclocking the chips until they started burning up.)

 

1 hour ago, Scopedog said:

I didn't know about this background info.  I'm not generally conspiracy-minded, but I find it very hard to believe that the US government isn't working on this exact type of subversive AI technology.

Some of it is mentioned in passing in Macross Plus... the Macross Concern is also the party who provided the bio-neural chip to the Venus Sound Factory team working on Sharon, and the same group who also developed the Ghost X-9 around the same AI technology.  The project's goal was to produce a next-generation unmanned fighter that could operate more flexibly on the battlefield and exhibit humanlike levels of unpredictability in combat maneuvers.

That same research is still ongoing in Macross Frontier's drama CDs, with LAI working on a next-generation Ghost that complies with the post-Sharon Apple Incident regulations on AI but can nevertheless still exhibit humanlike responses due to personality modeling AI.  (Luca's questionable judgement led him to model the prototypes on his crush and his two best friends from school.)

 

If the government were working on something like Sharon, we would know about it.  That kind of development involves hundreds of thousands of people and billions if not trillions of dollars in investment... and the government is absolute rubbish at keeping secrets at the best of times.  These are NOT the best of times when it comes to secrecy. 🤣  We know that what they are doing with AI is trying to make unmanned wingmen for manned 5th and 6th Generation fighters like what we see with Luca's Ghosts in Macross Frontier and the Lilldrakens and Super Ghosts in Macross Delta.  It's not going great, but it could be going a lot worse.  They're kind of at the "well at least it's not cartwheeling across the sky like a SpaceX rocket" phase.

(I can only assume conspiracy theorists are kids who never had to do group projects in school... and therefore have a very exaggerated and beautifully optimistic belief in how well people work together in groups. 🤣)

Posted
1 hour ago, Seto Kaiba said:

Eh... while that is partially correct, the actual amount of time it took is quite a bit longer than just "a few years" and owes a lot to the switch from vacuum tubes to transistors to modern integrated circuits.  The pace of advancement has also slowed down quite a bit in recent years because we have effectively hit the limits of what we can reasonably do with silicon in terms of improving packing density and clock rate.  (That's actually why high-end chips like Intel's 13th and 14th gen have been burning out.  The push for ever-faster clock rates while nearing the limits of silicon's performance led to simply overclocking the chips until they started burning up.)

 

Some of it is mentioned in passing in Macross Plus... the Macross Concern is also the party who provided the bio-neural chip to the Venus Sound Factory team working on Sharon, and the same group who also developed the Ghost X-9 around the same AI technology.  The project's goal was to produce a next-generation unmanned fighter that could operate more flexibly on the battlefield and exhibit humanlike levels of unpredictability in combat maneuvers.

That same research is still ongoing in Macross Frontier's drama CDs, with LAI working on a next-generation Ghost that complies with the post-Sharon Apple Incident regulations on AI but can nevertheless still exhibit humanlike responses due to personality modeling AI.  (Luca's questionable judgement led him to model the prototypes on his crush and his two best friends from school.)

 

If the government were working on something like Sharon, we would know about it.  That kind of development involves hundreds of thousands of people and billions if not trillions of dollars in investment... and the government is absolute rubbish at keeping secrets at the best of times.  These are NOT the best of times when it comes to secrecy. 🤣  We know that what they are doing with AI is trying to make unmanned wingmen for manned 5th and 6th Generation fighters like what we see with Luca's Ghosts in Macross Frontier and the Lilldrakens and Super Ghosts in Macross Delta.  It's not going great, but it could be going a lot worse.  They're kind of at the "well at least it's not cartwheeling across the sky like a SpaceX rocket" phase.

(I can only assume conspiracy theorists are kids who never had to do group projects in school... and therefore have a very exaggerated and beautifully optimistic belief in how well people work together in groups. 🤣)

I really appreciate your responses.  What do you think about mind-controlled aircraft with advanced variable geometry (not swing-wing but actual shape-shifting materials, which the Chinese have supposedly accomplished)?

Posted
1 hour ago, Scopedog said:

I really appreciate your responses.  What do you think about mind-controlled aircraft with advanced variable geometry (not swing-wing but actual shape-shifting materials, which the Chinese have supposedly accomplished)?

There are already a lot of experiments with controlling aircraft without using arm and leg movements. Mostly eyes and temple attachments, but brainwave control might not be too far off either since they’re also doing experiments with brain chips, oddly a lot in paralysis treatment. Controlling simple functions like flaps and such might actually be easier than the stuff trying to get the upper spine to control lower leg nerves.

As far as variable-shape materials go, they do exist, but I think it’s a long way from doing something as complex as the wings on the YF-21. But I will say that things are further along than I thought they’d be a couple of decades ago. I just really haven’t seen many practical uses for these materials due to issues with durability and with getting the materials to do more complex movements. Most at the moment are memory-type materials that usually just flex from one shape back to the original form.

Posted (edited)
1 hour ago, Scopedog said:

I really appreciate your responses.  What do you think about mind-controlled aircraft with advanced variable geometry (not swing-wing but actual shape-shifting materials, which the Chinese have supposedly accomplished)?

Rudimentary flight control using dry electroencephalographic sensors is technically quite possible. There were a number of gimmicky children's toys based on the idea of using an EEG headset to control a simple motorized toy back in the 2000s. Star Wars even got in on it in 2009 with a toy called the Force Trainer, which used an EEG headset to control the PWM of a fan that would levitate a plastic ball. The fad didn't last very long, in part due to those very basic sensors not being capable of complex control, but it's proof of concept at the very least.
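Just to illustrate how crude that kind of single-channel control is, here's a toy sketch of the Force Trainer idea: mapping one "attention" number to a fan duty cycle. The read_attention() function is a made-up stand-in; a real headset would need its vendor's SDK and calibration.

```python
import random

# Illustrative only: map a single "attention" score (what Force Trainer-style
# toys estimate from one dry EEG channel) to a fan PWM duty cycle.
def read_attention() -> float:
    """Stand-in for a headset reading; returns a value in [0, 1]."""
    return random.random()

def attention_to_duty_cycle(attention: float) -> int:
    """Map 0-1 attention to 0-100% PWM, with a dead zone to suppress noise."""
    attention = max(0.0, min(1.0, attention))
    return 0 if attention < 0.2 else int(attention * 100)

for _ in range(5):
    a = read_attention()
    print(f"attention={a:.2f} -> fan duty {attention_to_duty_cycle(a)}%")
```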

As in Macross, it would be an absolutely terrible way to try to control an aircraft. The YF-21's brain direct interface had realistically unforgiving design tolerances when it came to keeping the sensors aligned with the pilot's head. Even a few millimeters of slip in that sensor hood was enough to greatly reduce the system's accuracy. As such, the YF-21 ended up needing a pilot seat that almost totally immobilized the pilot to prevent that sensor hood from shifting. (This is why, in real EEG testing, they stick the sensors directly to your scalp with gel and even then encourage you not to move.) Supplemental technical publications also suggest, in a realistic touch, that the BDI system would need hundreds of hours of training and data collection in order to build up a translation database that could convert the pilot's brainwave data into usable machine instructions. Even then, a sharp shock or strong emotion could cause a loss of control over the system by introducing noise into the recorded brainwaves, much like we see happen in the OVA.  The system was ultimately much too finicky and unreliable to be practical in combat and was scrapped.

Attitude control via wing warping is technology that goes all the way back to the earliest powered aircraft. The modern version of the concept is called the adaptive compliant wing. It's something the US was testing back in the mid '80s. Testing using a modified F-111 revealed that the concept has durability issues and is rather more expensive than conventional wing surfaces: flaws that are echoed in the Macross universe's YF-21. There is an EU-funded research project called FLEXOP which is looking at ways to apply the technology to jet airliners as a way to save fuel through drag reduction.

Edited by Seto Kaiba
Posted
12 minutes ago, Scopedog said:

Wow, this is great feedback.  I'm going a bit off-topic now but what do you think about the Macross II bits system (or whatever they were called, I know it's an old idea in video games).  How close, or far, are we from having autonomous drones accompanying aircraft?

Maybe not too far, but it’s one of those things where the practice isn’t there yet. That may change, though. Normally, if we need the firepower of a manned aircraft, we’d just send a manned aircraft. With certain conflicts in other areas, it may be more practical to send an aircraft with other aircraft as escort, and as things change it may end up with more complex dogfighting that might benefit from a launched drone to help out. But I still think that’s not something we’d see with the types of conflicts we’re seeing at the moment.

Posted (edited)
16 hours ago, Seto Kaiba said:

Eh... as someone who works on vehicular autonomy systems professionally, we emphatically do not have self-driving vehicles yet.

It's actually a long way off, in terms of technological capability.  Much as with every other "AI" technology, the main stumbling block is processing power.  You need a very powerful computer to manage all the sensor and vehicle inputs needed to safely drive even at low speeds on city streets.  So much so that the few experimental cars certified with SAE Lv4 (limited-domain) autonomy have computers so large they have to be mounted on the top of larger cars like minivans or SUVs.  Those systems are only really capable of navigating a limited area on well-mapped city streets in good weather, and still require human intervention when they encounter an unsafe situation.  Those computers draw so much power in normal operation that the cars have to be fitted with auxiliary power systems just for the computer, and suffer reduced range from the extra weight and electrical demand.  A true autonomous vehicle would be SAE Level 5, which nobody has reached yet because it requires the car to be able to function essentially independently in any conditions and on any roads.  A computer advanced enough to do this would be prohibitively large and heavy, and draw too much power to actually put on a car.

About the best you can get in a commercially-available car is SAE Level 2 or Level 2+ autonomy, which is "Advanced Driver Assistance": features like lane keeping or adaptive cruise control.  The car is not actually capable of driving itself.  Tesla's "Full Self-Driving" is actually a Level 2 system that is falsely advertised as autonomous, which is why Tesla has been sued many times for false advertising, and for wrongful death by the families of customers who believed those claims and died when their "autonomous" cars crashed into stationary objects or other cars.  (Their impressive demonstrations of autonomous capability were found to actually be staged with cars driven by remote control.)

Could such a computer be land-based, with telemetry sent to it and commands sent back via cell/radio? I know the issues with that (loss of signal/signal degradation, hacking vulnerability, data corruption, signal crossover, signal jamming/blocking, spots on your dishes in the dishwasher, etc.); I just thought that might be another approach...

 

Edited by pengbuzz
Posted
1 hour ago, pengbuzz said:

Could such a computer be land-based, with telemetry sent to it and commands sent back via cell/radio? I know the issues with that (loss of signal/signal degradation, hacking vulnerability, data corruption, signal crossover, signal jamming/blocking, spots on your dishes in the dishwasher, etc.); I just thought that might be another approach...

 

I’ve seen Waymo’s cars getting stuck just going around in parking lots, so they tend to get confused even with the onboard hardware. I’d imagine that having everything off the vehicle might be even worse, but I’m nowhere near being an expert in these. But there was a hilarious video on YouTube of a guy trying to get to an airport while the car kept driving him in circles through a random lot, with him in the back trying to call tech support for help.

Posted (edited)
3 hours ago, pengbuzz said:

Could such a computer be land-based, with telemetry sent to it and commands sent back via cell/radio? I know the issues with that (loss of signal/signal degradation, hacking vulnerability, data corruption, signal crossover, signal jamming/blocking, spots on your dishes in the dishwasher, etc.); I just thought that might be another approach...

It's possible in theory, but you absolutely wouldn't want to attempt it in practice.

Even in perfect conditions, the additional latency involved in offboard control would be a major safety risk.  You typically have between 0.7 and 1.5 seconds to react in the event of a potential collision.  Cellular data networks are not the fastest, and you'll typically see a ping of around 50-500ms depending on how good your signal is, how contested the local network is, and what network you're on.  The car would not be able to send sensor data to the cloud and get a reaction back fast enough to avoid collisions in a lot of cases.  Toss in issues with network disruptions due to weather, distance to the nearest tower, noise, jamming, etc. and it becomes a nightmare scenario.
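To put rough numbers on that, here's a quick sketch using the figures above (the 0.1-second onboard reaction time is my own illustrative assumption, not a measured value):

```python
# Rough latency-budget sketch: how far does the car travel before each kind
# of controller can even begin to react?
speed_mps = 30.0              # ~108 km/h, roughly highway speed
onboard_reaction_s = 0.1      # assumed local sense-plan-act time (illustrative)
cell_round_trip_s = 0.5       # pessimistic cellular ping from the range above
human_reaction_s = 1.0        # middle of the 0.7-1.5 s range

scenarios = [
    ("onboard computer", onboard_reaction_s),
    ("cloud over cellular", cell_round_trip_s + onboard_reaction_s),
    ("human driver", human_reaction_s),
]
for label, delay in scenarios:
    print(f"{label:20s}: {delay:.2f} s -> {speed_mps * delay:5.1f} m travelled before reacting")
```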

Local control is much faster and more reliable, with an end-to-end communication and reaction time faster than what humans are capable of in most circumstances.

 

51 minutes ago, Big s said:

I’ve seen Waymo’s cars getting stuck just going around in parking lots, so they tend to get confused even with the onboard hardware. I’d imagine that having everything off the vehicle might be even worse, but I’m nowhere near being an expert in these. But there was a hilarious video on YouTube of a guy trying to get to an airport while the car kept driving him in circles through a random lot, with him in the back trying to call tech support for help.

That's one of the problems with limited onboard computer hardware... even with advanced radar, optical cameras, and LIDAR, autonomous vehicles can end up stuck when having to deal with flesh-and-blood drivers who don't follow the rules of the road as rigidly as the machines do.  It's particularly bad when there are unclear markings on the road, or road markings just aren't visible due to weather or wear and tear, since they depend on those to orient themselves.  There's a prank that's sometimes pulled by drawing a do-not-cross double line in salt or spray paint around an autonomous vehicle, trapping it with its own refusal to disobey traffic laws... and this is some of the most advanced autonomous AI we have.  We're a long, LONG way from something like Sharon Apple.
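The "salt circle" prank works precisely because the planner is just applying a hard rule. A toy sketch of that logic (purely illustrative, not any real planner's code):

```python
# Toy illustration of the "salt circle" prank: a rule-following planner that
# refuses to cross a do-not-cross line will simply stay put if it perceives
# such a line in every direction.
DO_NOT_CROSS = "double_line"

def legal_headings(surroundings: dict) -> list:
    """Return the directions the planner considers legal to move in."""
    return [heading for heading, marking in surroundings.items()
            if marking != DO_NOT_CROSS]

boxed_in = {"ahead": "double_line", "behind": "double_line",
            "left": "double_line", "right": "double_line"}
print(legal_headings(boxed_in))   # [] -> the car is "trapped" by its own rule-following
```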

Teslas are even more prone to such issues since they lack LIDAR arrays and try to get by purely with ultrasonics, radar, and optical cameras.  They often completely miss signage, fail to identify obstacles, and run into stationary objects they failed to see when visibility's poor or their camera lenses get dirty.

Edited by Seto Kaiba
Posted

WRT the previous and how it applies to drone aircraft.

Remotely operated or semiautonomous aircraft like the MQ-9 typically operate at altitudes and in areas where there are few to no collision risks.  These more forgiving conditions allow drones to adopt default behaviors like autonomously circling over an area or returning to base in the event that the control signal is lost.  They're typically low enough that the possibility of collision with another aircraft is minimal outside of intentional attempts to collide, but high enough that terrain, buildings, and foliage pose no risk either.  Being able to maneuver in three dimensions to avoid any possible collisions also makes matters far more forgiving.  Using dedicated base stations and military satellite networks helps with the network congestion and latency issues too.
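A minimal sketch of that kind of lost-link default behavior (loiter for a grace period, then return to base); the mode names and the 120-second threshold are illustrative, not any real autopilot's values:

```python
from enum import Enum, auto

class Mode(Enum):
    MISSION = auto()
    LOITER = auto()
    RETURN_TO_BASE = auto()

# Illustrative lost-link logic: keep flying the mission while the link is up,
# loiter for a grace period after it drops, then head home.
LOITER_GRACE_S = 120.0

def next_mode(link_ok: bool, seconds_since_link_loss: float) -> Mode:
    if link_ok:
        return Mode.MISSION
    if seconds_since_link_loss < LOITER_GRACE_S:
        return Mode.LOITER
    return Mode.RETURN_TO_BASE

print(next_mode(True, 0.0))      # Mode.MISSION
print(next_mode(False, 30.0))    # Mode.LOITER
print(next_mode(False, 300.0))   # Mode.RETURN_TO_BASE
```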
