A deeper look at what artificial intelligence actually is
Has AI become more of a marketing tool, a buzzword that gets attention and makes companies look like they’re on the cutting edge?
What, specifically, defines artificial intelligence? Is AI simply a fancy name for the robots you see in movies like Terminator and WALL-E? What makes a machine artificially intelligent, as opposed to merely useful?
Bogost cuts to the chase when it comes to what qualifies as AI: “Machines warrant the name AI when they become sentient—or at least self-aware enough to act with expertise, not to mention volition and surprise. … in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software.”
Although even the most advanced robotics software isn't technically considered AI, it could be the blueprint for a future AI version. For instance, the da Vinci Si robotic system, used to assist lung surgery, doesn't meet the definition of AI. It does, however, give surgeons greater precision and patients shorter recovery times.
According to Rush University, where the system is frequently used, the surgeon operates “while viewing a high-definition, 3D image from inside the patient’s body. The system translates his hand movements into precise, real-time movements of the video camera and surgical instruments that are attached to the platform’s robotic arms.”
Evolutionary leaps in AI always begin with current technology adapted to become autonomous; the da Vinci Si robotic system of today is no exception. This technology has the potential to become the prototype for tomorrow’s AI advances in surgery.
Think simple: AI has a practical use
Artificial intelligence encompasses more than autonomous robots that imitate human intelligence. While autonomous robots are cool, they're not (yet) useful for the average person.
Other examples of AI are far more practical. For instance, since 2009, Google has been developing a self-driving car it hopes to fully release by 2020. Now called Waymo (a new way forward in mobility), the project aims to “make it safe and easy for people and things to move around.”
Google's driverless car project began with 22 modified Lexus RX450h SUVs and has grown to include 33 prototypes of its own design. The cars have been tested on public roads with passengers who have access to a steering wheel and a brake—just in case. But the official release won't include any manual controls.
“Google believes that its AI self-driving system will consistently make the smartest, safest decision for the occupants of the vehicle as well as pedestrians or other users sharing the road; safer than even a human driver,” says Kirsten Korosec from Fortune.com. “And so the company is worried that giving human occupants of the vehicle mechanisms that controls things like steering, acceleration, braking, or turn signals is actually detrimental to safety because it can override the safer decision made by AI.”
Currently, it's illegal for autonomous cars to be used on public roads without a steering wheel and a brake. Ironically, the National Highway Traffic Safety Administration considers the AI system controlling the cars a legal driver under federal law.
In other words, the cars can be driven today, but not yet without manual controls as a fallback.
Going back to Bogost's article, he points out that AI has become more of a marketing tool for corporations. It's a buzzword that gets attention and makes a company look like it's on the cutting edge.
Unfortunately, misconceptions about AI have crept into announcements from well-known companies. For example, in response to the proliferation of abusive and spammy comments, Twitter announced it's continuously working on "making their AI smarter." Bogost points out that making changes to database queries hardly counts as AI—though to be fair, Twitter probably doesn't know what AI really is.
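Bogost's point is easy to make concrete. A hand-coded comment filter—a hypothetical sketch below, not Twitter's actual system—is deterministic rules over a keyword list. Tweaking the list or the query makes the filter different, not smarter; nothing in it learns.

```python
# A hand-coded comment filter: fixed rules, no learning.
# This is "just software" in Bogost's sense (hypothetical example,
# not any company's real system).
SPAM_KEYWORDS = {"free money", "click here", "buy followers"}

def is_spammy(comment: str) -> bool:
    """Flag a comment if it contains any known spam phrase."""
    text = comment.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

print(is_spammy("Click HERE for free money!"))  # True
print(is_spammy("Great article, thanks"))       # False
```

However many keywords you add, the filter's behavior is fully specified in advance—exactly the opposite of the learning and surprise Bogost requires.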
If AI isn’t automation, software, simple machine learning, content filters, or database tweaks, what is it?
Self-governance and learning are what make AI
For something to qualify as AI, it needs to learn in response to its environment—just as every living creature on Earth learns to adapt to ever-changing surroundings. It also requires self-governance and an element of surprise.
In other words, AI isn't a robot that replaces human workers and does only what it's programmed to do. It's something that can "think" outside of its program and exhibit the unexpected, just like in the movies.
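The learning-from-the-environment criterion can be sketched in a few lines. Below is a toy epsilon-greedy "bandit" agent (a hypothetical teaching example, not any product's system): the `pull` function simulates an environment with two actions, and the agent's behavior isn't fixed in advance—it shifts toward whichever action its own experience shows pays off.

```python
import random

def pull(arm: int) -> float:
    """Simulated environment: arm 1 pays off more often than arm 0."""
    return 1.0 if random.random() < (0.2 if arm == 0 else 0.8) else 0.0

def learn(steps: int = 2000, epsilon: float = 0.1) -> list:
    """Epsilon-greedy agent: estimate each arm's value from experience."""
    estimates = [0.0, 0.0]  # running value estimate per arm
    counts = [0, 0]
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known arm.
        if random.random() < epsilon:
            arm = random.randrange(2)
        else:
            arm = max((0, 1), key=lambda a: estimates[a])
        reward = pull(arm)
        counts[arm] += 1
        # Incremental mean: nudge the estimate toward the observed reward.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

print(learn())  # estimate for arm 1 should approach its true payoff, 0.8
```

Nothing here is "sentient," but the contrast with the keyword filter above is the point: the agent's final behavior is shaped by feedback rather than spelled out by its programmer.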
This article was written by Larry Alton from InfoWorld and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to firstname.lastname@example.org.