AI isn’t going anywhere, and AI-powered avatar customer service is on the rise, as a recent in-person demonstration in Orlando, Florida, made clear. Citing recent developments and expert perspectives, this article makes the case that AI doesn’t remove human value; it emphasizes it.
It’s no secret that AI and AI-powered avatars are already revolutionizing customer service across industries and verticals, so I jumped at the chance to get an in-person demo of the SmartMatters AI prototype kiosk from David DeFelici, VP of business development at Telecine, at the InfoComm 2023 expo in Orlando, Florida. We had a great conversation about technology, AI, and our mutual love of science fiction, but what stood out to me was how fast this technology is evolving, along with its potential for accessibility and inclusion.
Up close: an avatar in action
DeFelici showed me how the avatar can be customized to viewer preferences, with a range of options for traits like voice, gender, skin tone, and more, aiming to make the technology a more inclusive experience for users (a critical consideration as industries facing labor shortages adopt these systems). And, as DeFelici explained, various companies in the field are working to add more languages to these systems to provide a truly open experience for all.
A demonstration of the SmartMatters AI avatar-based kiosk. Video edited for clarity and length. Video credit: Daniel Brown/Networld Media Group.
While avatars can be programmed with specific knowledge bases (AI is already being used broadly, including to automate cookie dough production), many prototypes rely on the sort of general, web-based information model behind systems like ChatGPT (or those built on it). After the avatar performed well on a question about Digital Signage Today, I had to test its boundaries (true to my mischievous streak) with a question about physics, and once again the system provided a surprisingly cogent answer.
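As a rough illustration of that distinction, here is a minimal sketch, in Python, of the two-tier pattern: a curated, deployment-specific knowledge base answers what it can, and everything else falls through to a general, web-trained model. All names here are hypothetical; neither Telecine nor SmartMatters has published implementation details.

```python
# A minimal sketch of the two-tier pattern described above: answer from a
# curated, deployment-specific knowledge base when possible, and fall back
# to a general, web-trained model otherwise. Everything here is illustrative,
# not the actual SmartMatters implementation.

DOMAIN_KB = {
    # Vetted question -> answer pairs for this specific kiosk deployment.
    "what are your store hours": "We're open 9 a.m. to 9 p.m. daily.",
    "where is the restroom": "Past the elevators, on the left.",
}

def ask_general_model(question: str) -> str:
    # Placeholder for a call to a general-purpose model (e.g., a
    # ChatGPT-style API), stubbed so the sketch runs standalone.
    return f"[general model would answer: {question!r}]"

def answer(question: str) -> str:
    key = question.lower().strip(" ?!.")
    if key in DOMAIN_KB:
        return DOMAIN_KB[key]  # fast, vetted, domain-specific answer
    return ask_general_model(question)  # broad but less controlled fallback

print(answer("Where is the restroom?"))
print(answer("Tell me about quantum entanglement."))
```

The appeal of this kind of design is that an operator keeps tight control over high-stakes answers (hours, directions, policies) while the general model fields open-ended questions like my physics test.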
Discovering we shared an interest in physics, DeFelici demonstrated how the avatar can provide deep-dive answers in addition to general responses, asking the system to tell us about quantum physics (a general question) and quantum entanglement (a deeper, more specialized question). I was grudgingly impressed; I’ve been open about my love-hate relationship with this area of technology, but even in the face of my skepticism, several parts of the demo won me over.
I was also blown away by the quality of the high-definition character and the audio. Even in the brief months since my first dedicated AI avatar feature, the technology has advanced dramatically. That said, there are obvious questions about latency tied to things like internet connection quality; the system takes a few moments to “think” about a question (in this case, transmitting it to the relevant servers for processing), though given the breadth and depth of the questions, the response time was still far quicker than most humans could manage in the same situation.
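To make that “think time” concrete, here is a toy sketch of where the pause comes from: the kiosk sends the question over the network, waits on the remote model, and the latency the user perceives is the full round trip. The stubbed delay is invented for illustration; no real kiosk internals are implied.

```python
import time

def remote_answer(question: str) -> str:
    # Stand-in for the network round trip to the inference servers; a real
    # kiosk would make an HTTPS request to a hosted model here.
    time.sleep(0.5)  # simulated transmission plus server "think" time
    return "A surprisingly cogent answer."

def answer_with_timing(question: str) -> tuple[str, float]:
    # Measure latency as the user experiences it: from question to reply.
    start = time.perf_counter()
    reply = remote_answer(question)
    return reply, time.perf_counter() - start

reply, seconds = answer_with_timing("What is quantum entanglement?")
print(f"({seconds:.2f}s) {reply}")
```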
New era or hype cycle?
As the technology rolls out across industries, some experts worry about the well-known “hallucination” problem, in which AI systems make up plausible-sounding answers to queries; the issue recently landed one legal team in hot water after it used ChatGPT to draft arguments that, it turned out, cited fictional cases. This June, CNN reported on a lawsuit filed in California against OpenAI, the company behind ChatGPT, alleging illicit use of personal data from the internet in the software’s development.
Legal and medical use cases aside, the technology is already seeing rapid testing and adoption in areas like drive-thru ordering; companies like SoundHound are developing AI-powered voice ordering, for example (see below for a video demo from the company).
“As the Dynamic Interaction demo shows, this technology is incredibly user-friendly and precise,” Keyvan Mohajer, co-founder and CEO at SoundHound, said in a statement this February. “Consumers won’t have to modify how they speak to the voice assistant to get a useful response — they can just speak as naturally as they would to a human. As an added bonus they’ll also have the means to instantly know and edit registered requests. In our 17-year history of developing cutting-edge voice AI, this is perhaps the most important technical leap forward. We believe, just like how Apple’s multi-touch technology leapfrogged touch interfaces in 2007, this is a significant disruption in human-computer interfaces.”
SoundHound posted this public demo of its AI-powered technology on YouTube. Credit: SoundHound.
Of course, AI developers continue to face hurdles, from preventing systems from generating offensive or inaccurate responses and content to regulatory actions. For example, NBC reported on July 5 that New York City has passed a law requiring that hiring software using AI or machine learning to sort candidates into potential hires or automatic rejections (also known as automated employment decision tools, or AEDTs) pass a third-party evaluation for biases, including racism and sexism.
Still, there’s no denying that the future is well on its way, and AI avatars are a big piece of the customer service side of that future, probably arriving sooner than most people realize. I was reminded of asking AI expert Pete Erickson, founder of Modev, earlier this year what kind of timeline we’re looking at.
“Well, I would say, if you go through a drive thru in 2025, most the time you’re going to be talking to an assistant,” Erickson told me via video link. “If you go to a retail store in 2025, if that retail store doesn’t have an assistant on most every aisle to help you there, they’re way behind. Yeah, so I think we’re not that far off.”
As I stood in sweaty Orlando this June, listening to an AI avatar lecturing on quantum mechanics and recommending local restaurants within a ten-minute drive based on my preferred cuisine, that kind of timeline started making a lot more sense to me.
It’s alive(?)
Though, as so many experts have cautioned me in interviews and conversations, the branding and pop-culture use of the term “AI” may still be a little misguided. The pithiest explanation of the problem I’ve yet received came from David Colleen, inventor, technologist, and CEO at SapientX, who has worked in and around AI for decades.
“The machine learning systems that are in the news so much today — they’re exciting, but you have to think of them as Myna birds,” Colleen told me via video link earlier this year. “Even systems like GPT3 only repeats stuff that it hears, so to speak, on the internet. And it really doesn’t apply any intelligence to the use of what it hears.”
While newer iterations of the technology use massive leaps in computing power to generate what often feel like intelligent interactions, more and more experts voice reservations about how much intelligence lies underneath the simulacrum of dialogue, often citing the “bitter lesson” theory posited by computer scientist Rich Sutton in a 2019 essay:
“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin,” Sutton wrote. “The ultimate reason for this is Moore’s law, or rather its generalization of continued exponentially falling cost per unit of computation.”
Citing examples like the system that brought the first computer victory over a chess grandmaster, Sutton comes down roughly on one side of an old argument: a brute-force approach to simulating intelligence, rather than a human-shaped approach that seeks to encode specialized knowledge and, in theory, eventually produce a genuine form of intelligence or consciousness (the sort of Age of Spiritual Machines future heralded by futurist Ray Kurzweil and others).
A copy of “The Bitter Lesson,” by Rich Sutton. Credit: University of Texas at Austin, Computer Science department.
AI doesn’t remove human value; it emphasizes it
In the meantime, as AI systems, avatars included, continue their explosive spread across virtually every area of business and life, I see growing evidence that instead of replacing the human element of business, technology, content, art, and life, AI is actually emphasizing that element and making it ever more important, from content creation and customer service to the human workforce in industrial settings. And I’m not the only one.
In an article published this Sunday titled “Remaining Relevant in an AI Age,” business and technology expert Rishad Tobaccowala argues that to succeed in this new AI-powered landscape, people must embrace, adapt to, and complement AI, and he offers robust strategies for each element of that approach. His closing thoughts are worth sharing with anyone interested in or concerned about these questions, and they echoed themes from Jan Diekmann’s powerful keynote at this year’s Automate expo in Detroit, which argued that Industrial Revolution 5.0 is “compassion”: the harmonious blending of the human with the machine.
“Successful individuals and companies will complement the power of computing machines and software,” Tobaccowala wrote. “They will do this by enhancing, training, and bending what the technology can enable with creativity, storytelling, empathy, provenance, humanity, insight and imagination.
“We need to learn and feed this inside us. The future will be about data driven storytelling and not just data or storytelling and the ability to leverage modern machines and algorithms to unleash connection and meaning will depend on creativity… Time has proven that technology while bringing with it risks and downsides over time is a massive positive force for humanity. The future is bright and all we must do is open our eyes, heart and minds and seize the benefits of this amazing era.”
When it came time to find an image for publication, it all came together in a neat package. I’ve been watching Adobe Firefly since the March announcement that Adobe was going all in with an ethics-informed approach to generative AI — for instance, artists can tag their art as “do not train,” and Adobe says it will never train its Firefly AI on those artworks. “Generative AI is the next evolution of AI-driven creativity and productivity, transforming the conversation between creator and computer into something more natural, intuitive and powerful,” David Wadhwani, president of digital media business at Adobe, said in a press release. “With Firefly, Adobe will bring generative AI-powered ‘creative ingredients’ directly into customers’ workflows, increasing productivity and creative expression for all creators from high-end creative professionals to the long tail of the creator economy.”
While I have experimented with other platforms, I haven’t always been happy with the impact those platforms have had on other creatives and artists through the creation and deployment of their models. However, I was sufficiently impressed by Adobe’s efforts to protect human creatives while using AI to enhance creation that I used my first-ever AI-generated image on an article, which illustrates one of the many ways human-machine synergy works beautifully when executed thoughtfully. And it’s exactly that synergy, promised by innovations like the prototype I saw in Orlando, that I hope defines this moment for business, technology, and society itself.
Daniel Brown is the editor of Digital Signage Today. He is an accomplished technology writer whose experience includes creating knowledge base content for a major university’s computing services department. His previous experience also includes IT project management, technical support and education. He can usually be found in a coffee shop near a large pile of books.