The Chicago Journal

Alexa’s Transformation: From Robotic Assistant to Conversational Genius

Amazon’s Alexa is set to undergo a groundbreaking transformation, thanks to the integration of generative AI technology. In a recent announcement made at Amazon’s new Arlington, Virginia campus, the company revealed its plans to revolutionize the way we interact with its ubiquitous voice assistant. This update promises to make Alexa sound and function more human-like, opening up a world of possibilities for users. In this article, we’ll delve into the exciting changes that Alexa is about to undergo and their potential implications.

The Emergence of Generative AI in Home Devices

At the heart of this transformation lies the introduction of generative AI to Alexa. This cutting-edge technology enables artificial intelligence to create new content, such as text and images, in response to user prompts. In simpler terms, it empowers Alexa to engage in more natural and dynamic conversations with users, mimicking human-like interactions.

A Conversational Revolution

With this generative AI update, Alexa is poised to become a true conversation partner. It will now have the ability to resume conversations without requiring a wake word, respond promptly, learn user preferences, handle follow-up questions, and even adjust its tone based on the context of the discussion. Additionally, Alexa will offer opinions, injecting personality and depth into interactions, such as providing insights on Oscar-worthy movies.

A Human Touch

The goal of these updates is to make interactions with Alexa feel just like conversing with a real human being. This shift towards a more human touch in AI interactions could have profound implications for how we integrate technology into our daily lives.

Staying Ahead in the AI Arms Race

Amazon’s move to enhance Alexa’s capabilities comes as the tech industry experiences a surge in conversational AI tools. This development underscores the rapid evolution of consumer expectations when it comes to tech products. While the company has not specified when these updates will roll out, it’s clear that Amazon is determined to remain at the forefront of AI innovation.

Privacy and Security Considerations

As AI becomes more integrated into our lives, questions about privacy and security inevitably arise. While Amazon has not provided detailed safeguards, the company has pledged to prioritize customer privacy and security while ensuring transparency and control.

Collaboration and Integration

Amazon is also opening doors for collaboration through developer tools, allowing companies to work alongside its large language model. This collaborative approach could lead to exciting new applications of generative AI, such as in-car voice assistant capabilities, as evidenced by partnerships with companies like BMW.

The Future of Generative AI in Everyday Life

Analysts believe that these advancements mark a significant step forward in bringing generative AI into our homes and making it a part of our daily routines. The technology’s ability to understand and produce natural speech is poised to revolutionize the way we use AI in our everyday lives.

Accessibility for Users

Amazon is committed to making these updates accessible. Some US users will have the opportunity to experience the changes through a free preview on existing Echo devices, a rollout that reflects Amazon’s plan to integrate generative AI across its product lineup, from speakers to microwaves.

Expanding the Reach of Generative AI

Beyond Alexa, Amazon plans to extend generative AI to its Fire TV platform. This expansion will enable users to ask more natural and open-ended questions about genres, storylines, and content, enhancing the overall user experience.

A Decade of Transformation

Alexa, launched nearly a decade ago, was part of a movement that promised to revolutionize human-technology interactions. With the introduction of generative AI and the growing success of AI models like ChatGPT, this promise is evolving faster than ever, impacting a wide range of everyday products.

Alexa’s Continuing Innovation

Despite challenges and cost-cutting measures, Amazon remains dedicated to advancing Alexa’s capabilities. The company’s ultimate goal is to create a conversational AI that rivals the ‘Star Trek’ computer, serving as the ultimate source of knowledge in every aspect of our lives.


The introduction of generative AI technology to Amazon’s Alexa heralds a new era of human-like interactions with voice assistants. These updates promise to redefine how we integrate AI into our daily lives, presenting both opportunities and challenges. As Amazon continues to innovate, Alexa’s transformation opens the door to a future where AI seamlessly blends into our everyday routines.

BuzzFeed wants to take advantage of AI

Most people would be relieved to finally take time off from work, but others would choose to spend extra time in the office.

Jonah Peretti, co-founder and CEO of BuzzFeed, had the opportunity to learn more about artificial intelligence during the holidays.

Peretti is one of those individuals willing to explore cutting-edge technologies.

Over time, he learned more about how AI functions and gained an understanding of its potential effects on the media industry.

Given how frequently BuzzFeed has employed AI, his curiosity is hardly surprising.

Late last month, Peretti took some time to reflect on how the core values of BuzzFeed may be applied to the emerging field of AI writing technology while at his California home.

The news

Jonah Peretti said in an interview on Thursday that he and a few employees prototyped ways to use the technology to enhance popular quizzes, interactive articles, and more.

Peretti asserted that he had fun, saying, “It started to feel like we were all playing.”

Treating the exercise as playful work, the team produced many Google Docs exploring how the technology might be integrated into BuzzFeed’s platform.

According to the documents, Peretti might be able to expand the experiments to other formats.


Jonah Peretti formally announced on Thursday that, as a result of those experiments, BuzzFeed will collaborate with OpenAI, the company behind ChatGPT.

In addition to incorporating AI into its “core business,” BuzzFeed will be able to use OpenAI’s technology to produce content for its audience thanks to the partnership.

Peretti acknowledged that some readers of the headline would think BuzzFeed was replacing people with machines.

Although he insisted it wasn’t the technology’s intended use, he thought other businesses would follow such a path.

“I think there are two paths for AI in digital media,” said Peretti.

Read also: MacBook to perform better with two new M2 chips

“One path is the obvious path that a lot of people will do– but it’s a depressing path – using the technology for cost savings and spamming out a bunch of SEO articles that are lower quality than what a journalist can do, but a tenth of the cost.”

“That’s one vision, but to me, that’s a depressing vision and shortsighted vision because in the long run, it’s not going to work.”

He continued, saying:

“The other path, which is the one that gets me really excited, is the new model for digital media that is more personalized, more creative, more dynamic – where really talented people who work at our company are able to use AI together and entertain and personalize more than you could ever do without AI.”

Jonah Peretti sees artificial intelligence (AI) as a tool, not a substitute, to help his team work more efficiently.

BuzzFeed quiz

The company pointed to the classic BuzzFeed quiz to illustrate Peretti’s objective.

Today, humans write both the questions and the answers users receive.

With AI, staff members could instead generate highly tailored results from each visitor’s responses.

For instance, a user might answer a short survey, and the AI would use that information to create a brief piece of rom-com content.

“We don’t have to train the AI to be as good as the BuzzFeed writers because we have the Buzzfeed writers,” Peretti explained.

“So they can inject language, ideas, cultural currency and write them into prompts and the format.”

“And then the AI pulls it together and creates a new piece of content.”
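Peretti’s description suggests a simple prompt-templating pattern: writers author the template and tone, a user’s quiz answers fill the slots, and a language model assembles the final piece. A minimal sketch of that idea (the template, field names, and the stubbed generate() call are all hypothetical, not BuzzFeed’s actual code):

```python
# Hypothetical sketch of writer-authored prompt templating for quiz results.
# The template, quiz fields, and generate() stub are illustrative only.

QUIZ_TEMPLATE = (
    "Write a short, playful rom-com blurb for a reader whose ideal date is "
    "{date}, whose comfort food is {food}, and whose go-to movie genre is "
    "{genre}. Keep it under 80 words and upbeat."
)

def build_prompt(answers: dict) -> str:
    """Combine the writer-authored template with one user's quiz answers."""
    return QUIZ_TEMPLATE.format(**answers)

def generate(prompt: str) -> str:
    """Stand-in for a call to a hosted language model."""
    return f"[model output for prompt: {prompt[:40]}...]"

answers = {"date": "a picnic", "food": "ramen", "genre": "mystery"}
print(generate(build_prompt(answers)))
```

The division of labor is the point: the “cultural currency” lives in the writer-authored template, while the model only assembles the personalized output.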

A different path

Jonah Peretti has no interest in using artificial intelligence to produce news stories in place of actual journalists.

He pointed to the backlash suffered by CNET, a tech news site, over its AI-written articles.

“There’s the CNET path, and then there is the path that BuzzFeed is focused on,” said Peretti.

“One is about costs and volume of content, and one is about ability.”

“Even if there are a lot of bad actors who try to use AI to make content farms, it won’t win in the long run,” he continued.

“I think the content farm model of AI will feel very depressing and dystopian.”

Grammy Awards post strict rules for AI-generated music

Technology has been a crucial instrument in music creation, and in recent decades musicians have risen to fame on the strength of years of invention.

Cher pioneered the use of Auto-Tune in the late 1990s with her hit “Believe,” ushering in a new era of music production.

Auto-Tune went on to shape the sound of many performers throughout the 2000s and 2010s.

As 2023 approached, artificial intelligence began to gain traction in a variety of industries, and it wouldn’t be long until music was exposed to a new set of revolutionary tools.

While widely debated, AI-generated music has proven difficult to ignore, even for the prestigious Grammy Awards.

Many people were surprised when the grand stage of the music business issued its decision on the use of AI-generated music.

It comes, however, with strict rules.

Read also: Artificial intelligence makes its way into neuroscience with new system development


The Recording Academy, the institution in charge of the Grammy Awards, established new guidelines ahead of the 2024 ceremony.

The guidelines state that only “human creators” are eligible.

While songs featuring AI-generated music can be nominated, evidence of human contribution is required.

“If there’s an AI voice singing the song or AI instrumentation, we’ll consider it,” said Harvey Mason Jr., CEO of the Recording Academy.

“But in a songwriting-based category, it has to have been written mostly by a human.”

The guideline

The 66th Grammy Awards will be held on February 4, 2024, and the guidelines give further detail on how the Academy will address AI-generated music.

“The Grammy Award recognizes excellence. Only human creators are eligible to be submitted for consideration for, nominated for, or win a Grammy Award,” the guidelines read.

“A work that contains no human authorship is not eligible in any Categories. A work that features elements of AI material is eligible in applicable categories.”

“However: the human authorship component of the work submitted must be meaningful and more than de minimis; such human authorship component must be relevant to the category in which such work is entered.”

“The authors of any AI material incorporated into the work are not eligible to be nominees or Grammy recipients insofar as their contribution to the portion of the work that consists of such AI material is concerned.”

Accommodating the future

Mason admitted that artificial intelligence will affect the music industry’s future.

He encouraged the Grammys to address AI-related concerns rather than dismiss them.

“How can we adapt to accommodate? How can we set guardrails and standards?” Mason challenged.

“There are a lot of things that need to be addressed around AI as it relates to our industry.”

Deepfake vocals

The Grammy rules come at a time when AI-generated music has taken over the internet, with deepfake tunes becoming viral.

For example, two-time Grammy winner David Guetta used AI earlier this year to add an Eminem-style vocal to one of his tracks.

The fake single “Heart on My Sleeve” featuring AI-generated Drake and The Weeknd, on the other hand, received the most attention.

Universal Music Group demanded that the track be taken down, citing copyright violations.

According to a UMG spokesperson, the AI-generated song confronted music industry stakeholders with a choice: stand with artists, creative expression, and fair compensation, or stand with the fraud that denies artists their just recompense.

Universal Music Group allegedly requested that streaming giants Spotify and Apple Music prohibit AI software startups from utilizing the label’s music.

As the rise of deepfake tracks suggests, AI companies often train their systems on the label’s songs.

Artists and AI

Sir Paul McCartney has announced plans for a “final Beatles record” created with the help of AI and the assistance of filmmaker Peter Jackson.

“He [Jackson] was able to extricate John’s voice from a ropey little bit of cassette that had John [Lennon]’s voice and a piano,” said McCartney.

“He could separate them with AI — he could tell the machine, ‘That’s the voice, that’s the guitar, lose the guitar,’ and he did that, so it has great uses.”

“We were able to take John’s voice and get it pure through this AI,” McCartney added.

“So then we could mix the record as you would normally do.”

Meanwhile, songwriter Don McLean claims that AI-generated music will not be “worse” than some of today’s tracks.

However, Daft Punk’s Thomas Bangalter says AI’s prominence isn’t what his old band stood for.

“We tried to use these machines to express something extremely moving that a machine cannot feel, but a human can,” said Bangalter.

“We were always on the side of humanity and not on the side of technology.”

Artificial intelligence makes its way into neuroscience with new system development

While artificial intelligence has recently gained traction in technology, it is also gaining ground in science.

Artificial intelligence is being researched by scientists from a variety of disciplines.

For example, a peer-reviewed study published Monday in the journal Nature Neuroscience described how AI may be applied to brain activity.

Scientists developed a noninvasive AI system that can convert people’s brain activity into a stream of text, according to the research.

Artificial intelligence & neuroscience

Artificial intelligence can improve neuroscience by increasing the efficiency and precision of large-scale dataset analysis.

It has the promise of producing more accurate models of neural systems and processes.

Artificial intelligence can also assist in the development of innovative neurological diagnostic and therapeutic methods.

The system

The system is known as a semantic decoder.

It could benefit people who have lost the physical ability to communicate as a result of a stroke, paralysis, or other degenerative illnesses.

The technique was developed by researchers at the University of Texas at Austin using a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.

Participants in the study trained the decoder by listening to hours of podcasts inside an fMRI machine, a large piece of equipment that monitors brain activity.

A surgical implant is not required for the semantic decoder.


Artificial intelligence may aid neuroscience in turning thoughts into text by using machine learning algorithms to analyze the brain activity patterns involved in language processing.

By analyzing patterns of brain activity and using that information to create corresponding text output, AI systems may identify specific words or phrases a person is thinking about.

This technology has the potential to revolutionize communication for individuals who are unable to speak or type, such as those suffering from severe paralysis or communication issues.

More research is needed, however, to improve these systems’ precision and dependability, and to address the ethical and privacy concerns associated with accessing and interpreting people’s thoughts.


The AI system creates a stream of text as people listen to, or imagine hearing, a new story.

Although the text is not an exact transcription, the researchers designed it to convey key ideas.

According to a recent news release, the trained system produces text that closely matches the intended meaning of the participant’s original thoughts about half the time.

For example, when a research participant heard the words “I don’t have my driver’s license yet,” the decoder rendered the thought as “She hasn’t even begun to learn to drive yet.”

Read also: Softbank to Sell Alibaba Stake, Causing Share Drop

The absence of implants

Alexander Huth, one of the study’s lead researchers, stated:

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences.”

“We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

The semantic decoder, unlike earlier decoding systems under development, does not require surgical implants, making it noninvasive.

Furthermore, participants aren’t obligated to use only terms from a specified list.

Potential misuse

Concerns regarding the technology’s potential misuse were also addressed by the researchers.

The researchers found that decoding worked only with volunteers who had willingly trained the decoder.

Used on individuals who had not trained it, the decoder produced incomprehensible results.

Participants who used the decoder but actively resisted it likewise produced unusable results.

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said researcher Jerry Tang.

“We want to make sure people only use these types of technologies when they want to and that it helps them.”

Because of the time an fMRI scan requires, the machine can only be used in the laboratory.

The findings might be extended to other, more portable brain-imaging methods, such as functional near-infrared spectroscopy (fNIRS), according to the researchers.

“fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” said Huth.

“So, our exact kind of approach should translate to fNIRS.”

Snapchat and AI: a parent’s new headache

With the AI revolution in full swing, it’s no surprise that nearly every technology company wants a piece of the action.

Snapchat, the multimedia instant messaging service, has just created My AI in order to compete in the field of artificial intelligence.

While the technology has many advantages, it is also causing stress for teenagers and parents.

Read also: ChatGPT already presenting a challenge for schoolwork

What happened?

Lyndi Lee of East Prairie, Missouri, warned her 13-year-old daughter against using the function.

Lee, a software developer, is concerned about how My AI is seen by younger users.

“It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee.

“I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view.”

“I just think there is a really clear line [Snapchat] is crossing.”


Snapchat’s newest feature was just released.

It is powered by ChatGPT, a platform that offers ideas, answers queries, and interacts with users.

However, there are some significant differences:

  • Users can customize the chatbot’s name
  • Users can design custom Bitmoji avatars for the AI
  • Users can bring the AI into conversations with friends

Interacting with the chatbot may appear to be less transactional than interacting with the ChatGPT website.

It also makes it difficult to tell the difference between engaging with humans and communicating with robots.


Because of privacy issues, Snapchat’s new feature has received scathing criticism in app stores and on social media.

Other users have complained about creepy exchanges and the inability to remove My AI from their chat feed without purchasing a premium subscription.

Despite the fact that some people find the tool useful, the mixed response highlights the risks that businesses face when incorporating generative AI technology into their products, especially for brands with young audiences like Snapchat.

Snapchat was one of the early launch partners when OpenAI made ChatGPT open to third-party firms.

Almost immediately, Snapchat’s move prompted families and governments to voice new worries.

Democratic Senator Michael Bennet wrote to the CEOs of Snap and other technology companies in March, shortly after My AI was made available to Snap’s subscription customers.

Bennet is skeptical of the chatbot’s interactions with younger users.

Reports he cited suggested the chatbot could even show children how to fool their parents.

“These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote.

“Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

Snap responded recently, saying:

“My AI is far from perfect, but we’ve made a lot of progress.”


Snapchat users have voiced concerns since My AI’s official introduction.

One user described his interaction with the chatbot as frightening: it first claimed not to know his location, then, after the tone of the conversation changed, revealed that he lived in Colorado.

In a TikTok video, a user named Ariel showed My AI composing a song, complete with an intro, chorus, and piano part, about what it’s like to be a chatbot.

When she later showed the song back to the chatbot, it denied any involvement, saying:

“I’m sorry, but as an AI language model, I don’t write songs.”

Snapchat said it will continue to develop My AI based on user feedback and will implement additional safeguards to keep users safe.

According to the firm, users can also choose not to interact with My AI.

Dropping My AI from conversation streams, on the other hand, requires a paid subscription to Snapchat+.

Some users caved, paying for the subscription to turn off the tool before canceling the service.

Teens and chatbots

ChatGPT has previously been criticized for giving misleading information, behaving inappropriately with users, and allowing students to cheat.

Integrating it into Snapchat, on the other hand, may worsen existing challenges while introducing new ones.

According to New York clinical psychologist Alexandra Hamlet, several patients’ parents were concerned about how their adolescents might use Snapchat’s new feature.

Concerns have also been expressed about chatbot guidance, particularly in the context of mental health.

Artificial intelligence technologies can reinforce a person’s confirmation bias, driving them to seek out sources that support their mistaken beliefs.

“If a teen is in a negative mood and does not have the awareness or desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” said Hamlet.

“Over time, having interactions like these can erode a teen’s sense of worth, despite their knowing that they are really talking to a bot.”

“In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

Parents, according to WAYE founder Sinead Bovell, must make it clear that the chatbot is not a friend.

“They’re also not your therapist or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” said Bovell.

“Parents should be talking to their kids now about how they shouldn’t share anything with a chatbot that they would a friend – even though, from a user design perspective, the chatbot exists in the same corner of Snapchat.”

Microsoft AI actually had errors in demo last week

Last week, two major tech firms engaged in a contest to showcase their progress in artificial intelligence.

Early versions of the AI-powered search tools from Google and Microsoft were on display at two separate events.

Microsoft gained an edge because its event took place a day before Google’s, whose disastrous demo sent Alphabet’s shares falling.

Thanks to the presentation’s widespread attention, more than a million people attempted to sign up for and use Microsoft’s new tool in the first 48 hours.

According to Microsoft CEO Satya Nadella, the technology could bring to knowledge work the kind of transformation the industrial revolution brought to physical labor.

The achievement was not without problems, however: the AI’s performance validated concerns about accuracy.

The demo

In the demo, Bing’s ChatGPT-based AI system analyzed financial reports, including those from Gap and Lululemon.

Compared with the actual reports, the chatbot’s summaries overlooked certain data.

Viewers also noticed that some of the figures appeared to be fabricated.

Dmitri Brereton, an independent search researcher, wrote the following on Monday in a post on Substack:

“Bing AI got some answers completely wrong during their demo. But no one noticed. Instead, everyone jumped on the Bing hype train.”

Brereton also drew attention to apparent factual errors elsewhere in the demo, in the specifications of a vacuum cleaner and in an itinerary for a trip to Mexico.

The researcher said he wasn’t purposefully trying to find errors.

Brereton only noticed the errors when he tried to compare the Microsoft and Google AI reveals for a post he was writing.

Meanwhile, AI experts referred to the errors as “hallucinations.”

In artificial intelligence, “hallucination” refers to the tendency of tools built on large language models to fabricate information.

When Google put on a similar event, its AI system also produced factual errors that were easily detected.

AI and search engines

Google and Microsoft are working to integrate new types of generative AI into their search engines as a way to showcase their progress.

The rivalry intensified after OpenAI debuted ChatGPT in November.

Microsoft has invested billions of dollars in OpenAI.

Several companies, such as Stability AI and Hugging Face, also grew rapidly during this period, reaching billion-dollar valuations in private financing rounds.

Read also: BuzzFeed wants to take advantage of AI

Google, on the other hand, was wary of incorporating AI-generated answers into its search engine because it needed to protect its reputation for providing the best results.

The company also had safety concerns.

However, during its introduction, Microsoft emphasized the importance of getting the technology into the public’s hands early.

“I think it’s important not to be in a lab,” Nadella added. “You have to get these things out safely.”

Demo problems

When Bing debuted its AI tool, there were problems with its summaries of corporate earnings results.

Yusuf Mehdi, a Microsoft marketing executive, went to the Gap investor relations website and asked Bing AI to highlight the company’s third-quarter numbers from November.

The AI-generated summary included the following errors:

  • Bing stated Gap’s gross margin as 37.4%; with Yeezy-related charges excluded, the figure was actually 38.7%.
  • Bing gave an operating margin of 5.9% rather than the actual 4.6% (Gap’s report did not include this metric).
  • Bing reported adjusted diluted earnings per share of $0.42; the actual figure was $0.71, with Gap’s report showing an adjusted income tax benefit of nearly $0.33.
  • According to Gap, fourth-quarter net sales could decrease in the mid-single digits, resulting in lower revenue for the full year; the company gave no operating margin forecast.
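Errors like these are mechanically checkable, since each claimed figure can be compared against the number in the source filing. A minimal sketch of that idea, using the Gap discrepancies listed above (the field names and comparison structure are invented for illustration):

```python
# Compare AI-claimed figures against the figures in the source report and
# flag any mismatches. Values mirror the Gap discrepancies described above;
# the dictionary structure and field names are illustrative.

claimed = {"gross_margin_pct": 37.4, "operating_margin_pct": 5.9, "adj_diluted_eps": 0.42}
reported = {"gross_margin_pct": 38.7, "operating_margin_pct": 4.6, "adj_diluted_eps": 0.71}

def flag_mismatches(claimed: dict, reported: dict, tol: float = 1e-9) -> list:
    """Return the names of figures where the claim differs from the source."""
    return [k for k in claimed if k in reported and abs(claimed[k] - reported[k]) > tol]

print(flag_mismatches(claimed, reported))  # all three figures differ
```

Checks like this only catch numbers that can be matched to a source; fabricated figures with no counterpart in the filing, such as the operating margin above, still require a human reading the original report.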


Microsoft is aware of the errors and expects the Bing AI to keep making them.

“We’re aware of this report and have analyzed its findings in our efforts to improve this experience,” said a Microsoft spokesperson.

“We recognize that there is still work to be done and are expecting that the system may make mistakes during this preview period, which is why the feedback is critical, so we can learn and help the models get better.”

Robots prove critical to restaurant industry this year

The hospitality industry, and restaurants in particular, have adjusted their strategies to incorporate more technology in recent years.

Recently, more AI has been incorporated into restaurants.

For instance, Chipotle Mexican Grill is testing whether robots can make tortilla chips at some of its branches.

Meanwhile, two Sweetgreen locations intend to automate the creation of their salads.

Starbucks wants to upgrade its coffee-brewing equipment to lighten the workload for its baristas.

The progress so far

In 2022, the restaurant industry announced a number of automation initiatives.

The push came as operators scrambled to find solutions to shrinking staff and rising wages.

However, efforts have varied during the course of the year.

It will be years before utilizing robots pays off for businesses or replaces employees, according to experts.

David Henkes, a principal at the restaurant industry research firm Technomic, said:

“I think there’s a lot of experimentation that is going to lead us somewhere at some point.”

“But we’re still a very labor intensive, labor-driven industry.”

Early struggles

Hiring and retaining workers was a challenge for restaurants even before the pandemic.

The pandemic made the issue more acute, as laid-off workers sought jobs in other industries.

The National Restaurant Association reports that a shortage of competent workers prevents three-quarters of restaurants from operating at full capacity.

Although restaurant operators increased pay to entice personnel, the rising cost of food also put pressure on profits.

Automation-focused startups presented themselves as the solution, arguing that robots complete tasks more dependably than burnt-out humans.

They noted that artificial intelligence allows for more precise drive-thru order entry into computers.

Read also: Tesla’s AI Day introduces Optimus, the company’s first humanoid robot


Most of the year’s announcements came from Miso Robotics, which secured $108 million in November at a valuation of $523 million, according to PitchBook.

The company’s most significant invention is a robot named Flippy.

Flippy may be configured to prepare chicken wings and burgers for a monthly rental fee of $3,000.

White Castle promised to install 100 additional Flippy models while renovating four locations.

A new tortilla chip-making robot named Chippy is now being tested by Chipotle Mexican Grill at a site in California.

Miso’s CEO, Mike Bell, stated:

“The highest value benefit that we bring to a restaurant is not to reduce their expenses, but to allow them to sell more and generate a profit.”

Flippy hasn’t been able to go past the testing phase at Buffalo Wild Wings after operating there for more than a year.

Other progress

Miso is one of the privately held startups that Inspire Brands says it has worked with to automate the frying of chicken wings.

Startup Picnic Works produces equipment that adds sauce, cheese, and other toppings to pizzas.

A Domino’s franchise is now testing the technology in Berlin.

Picnic Works rents out its equipment starting at $3,250 per month.

CEO Clayton Wood claims that the subscription makes the technology more affordable for smaller businesses.

According to PitchBook, Picnic Works raised $13.8 million at a $58.8 million valuation.

Panera Bread has been testing automated ordering using AI technologies.

It also uses a Miso system that tracks temperature and volume to improve the quality of its coffee.

“Automation is one word, and a lot of people go right to robotics and a robot flipping burgers or making fries,” said Panera Bread chief digital officer George Hanson.

“That is not our focus.”

Even with the advancements, success is not guaranteed.

In 2020, Zume stopped using robots to prepare, cook, and deliver food.

Instead, the company focused on food packaging.


Workers and labor advocates frequently criticize employers for eliminating jobs through the use of robots and automation in the workplace.

Meanwhile, restaurant operators have touted their efforts as a way to improve working conditions and eliminate more challenging tasks.

Sweetgreen will automate salad assembly at two new locations opening next year, using technology created by the startup Spyce.

According to Nic Jammet, co-founder and CCO of Sweetgreen, the new restaurant model requires fewer workers per shift.

Jammet noted that lower turnover and higher employee satisfaction were secondary advantages.

According to Dalhousie University economist Casey Warman, the industry’s penchant for automation will lead to a permanent drop in the number of workers.

“Once the machines are in place, they’re not going to go backwards, especially if there’s large cost savings,” said Warman.

He continued by saying that the pandemic significantly decreased resistance to automation.

In the early stages of the pandemic, customers grew accustomed to grocery store self-checkout lanes and relied on mobile apps to order food.

Ball State University assistant professor Dina Zemke studies consumer perceptions of restaurant automation.

Zemke noted that customers had grown tired of the limited hours and slow service caused by restaurants’ labor shortage.

In a third-quarter Technomic study, 22% of the owners of more than 500 restaurants said they were investing in equipment that would eliminate the need for kitchen staff.

Meanwhile, 19% of households also began using labor-saving ordering technologies.


Although there are benefits to automation, it is still uncertain whether there will be any cost savings.

McDonald’s tested order-taking technology for drive-thrus years ago after acquiring the AI startup Apprente.

Months after announcing the test, the fast food giant sold the unit to IBM as part of a collaboration to improve the technology.

At more than 20 test locations in Illinois, the voice-ordering program was only about 80% accurate, falling short of the 95% target.

During an earnings call this summer, McDonald’s CEO Chris Kempczinski discussed automation.

“The idea of robots and all of those things, while it maybe is great for garnering headlines, it’s not practical in the vast majority of restaurants,” said Kempczinski.

“The economics don’t pencil out. You’re not going to see that as a broad-based solution anytime soon.”

However, smaller-scale automation of routine tasks may hold more promise.

White Castle vice president Jamie Richardson asserted that innovations like Coca-Cola Freestyle machines had a bigger impact on sales.

“Sometimes the bigger automation investments we make aren’t as earth shattering,” said Richardson.


Why restaurant chains are investing in robots and what it means for workers

Lensa AI terms and conditions prove to be dangerous

Lensa AI: As the year draws to a close, artificial intelligence in art has become a hot topic.

People have also used artificial intelligence applications to transform images of themselves.

Lensa AI is one of the fastest-growing of the new AI selfie apps.

However, observers have warned users, pointing out alarming details in its terms and conditions.

The news

Most people have gotten used to registering online without reading the terms and conditions properly.

Lensa AI, an app that generates selfies from 10 to 20 uploaded images, is among the apps whose terms and conditions are routinely skimmed.

Users who upload their photos receive over 50 AI-generated selfies in various styles.

However, posts on social media highlighted the terms and conditions, noting that the deal allows Lensa AI to use and distribute uploaded images without compensation.

In addition, uploading content signifies the user’s acceptance of the Company License, which allows the app to use their photos.

Terms and conditions

Lensa AI’s terms and conditions read:

“Lensa may allow you to upload, edit, store and share content, including photos and videos.”

“We do not claim ownership over your User Content,” it continues.

“However, in order to provide you with the services (e.g., to make our services accessible to you), we seek a legal permission from you to use your User content.”

“You give us such a permission by means of a Company License.”

Users must then grant Lensa AI a long-term license allowing the company to do the following:

  • Use
  • Reproduce
  • Modify
  • Distribute
  • Create derivative works

If they agree, users will not receive any additional compensation.

Furthermore, where required by applicable law, users give additional explicit consent to such use under the company’s privacy policy (Company License).

Company License

The Lensa AI company license is intended to allow Lensa to operate and improve its current and potential products.

Users who agree let Lensa’s AI train on their uploads through the app’s Magic Avatars feature.

“The Company License terminates when you delete the User Content from Lensa’s Library or by terminating your account.”

However, Lensa AI’s terms also state that users can revoke the license by contacting the company’s email address.


Apple reportedly plans to bring changes to Siri

Apple has introduced plenty of innovations in 2022 so far, and it is reportedly about to make another change, this time to Siri.

The tech giant is reportedly planning to ditch “Hey.”

The report

Apple is allegedly training Siri, its voice assistant, to take commands without saying the first half of the phrase “Hey Siri.”

The activation phrase launches Siri on Apple products like the iPhone, iPad, HomePod, and Apple Watch.

According to Bloomberg, the move could take place in 2023 or 2024.


While the update is minor, experts believe it is a sign that more changes are underway and that in-depth training in artificial intelligence will be required.

Lian Jye Su, research director at ABI Research, said the system recognizes requests more accurately from two keywords.

The transition to a single word would rely on a more advanced artificial intelligence system.

“During the recognition phase, the system compares the voice command to the user-trained model,” explained Su.

“‘Siri’ is much shorter than ‘Hey Siri,’ giving the system potentially less comparison points and higher error rate in an echo-y, large room and noisy environments.”

The move

Apple’s change would allow it to catch up with Amazon’s “Alexa” prompt, which doesn’t require a word before the assistant’s name.

In 2018, Microsoft moved away from “Hey Cortana” so that users need only say “Cortana” on smart speakers.

However, users of Google products must still use the phrase “OK Google.”

The “Hey Siri” change comes at a time when Apple, Amazon and Google are working together on the Matter automation standard.

The Matter automation standard allows automation and Internet of Things devices from different vendors to collaborate.

James Sanders, the chief analyst at market research firm CCS Insight, says Apple’s priority is likely to redouble efforts to improve Siri functionality.


Apple’s voice assistant has been active since February 2010, over twelve years ago.

It started as a standalone app on Apple’s App Store before the tech giant bought it two months later.

Apple then integrated Siri into the iPhone 4S.

In 2014, Apple introduced the ability to summon Siri by saying “Hey Siri” without pressing the home button.

Over the years, Siri has gotten smarter by integrating with third-party apps, such as carpooling and payment services.

It also supports follow-up questions, multiple languages, and more.

Despite the improvements, Siri still has problems, including misunderstanding users and giving wrong answers.

“While the ‘Hey Siri’ change requires a considerable amount of work, it would be surprising if Apple announced only this change to Siri,” said Sanders.

“Considering the rumored timing, I would anticipate this change to be bundled with other new or improved functionality for Siri, perhaps alongside a new model of HomePod and integrations with other smart home products via Matter, as a reintroduction to Apple’s voice assistant.”


An AI-generated piece from Midjourney sparks the wrath of the art community after winning an art competition

While AI has been of great use in multiple industries, it became a source of controversy when a man won an art contest with an AI-generated image.

Jason M. Allen, 39, won the digital art/digitally manipulated photography category at the Colorado State Fair last week with his submitted image.

The work submitted

According to CNN, Allen was nervous about entering, but his decision earned him the win and a $300 prize.

The picture, “Théâtre D’opéra Spatial,” which translates to “Space Opera Theater,” depicts a scene seemingly drawn from a science-fiction fantasy concept: women in Victorian dress gazing into a portal.

Jason M. Allen

Jason M. Allen, the man who submitted the AI-generated image, is the founder and lead developer of the board game company Incarnate Games.

Based in Pueblo West, Colorado, Allen decided to take a chance on the competition.

“I’m fascinated by this imagery,” said Allen. “I love it. And I think everyone should see it.”

The tool that developed the image

While Jason M. Allen was delighted to win first prize on his first attempt at a competition, it turned out that he had created the piece using Midjourney.

Midjourney is a Discord-based AI art generator that has been making the rounds online recently.

It’s one of many AI image generators, like Google Research’s Imagen and OpenAI’s DALL-E 2.

The process

Although Allen didn’t go through the same processes as most artists, he said the image didn’t come easily.

He said there was a lot of work to be done.

Jason M. Allen said it took 80 hours to complete the piece, experimenting with prompts to create images of women wearing Victorian dresses and space helmets.

Allen developed over 900 iterations of the image before cleaning it up in Photoshop.

He then ran the images through Gigapixel AI to improve the resolution and then printed the piece on canvas.

Reaction to the win

Allen’s use of AI to complete the work sparked outrage over the prize, with many saying it devalued the hard work people put into creating physical works of art.

“Jason Allen, you are NOT an artist. You have never used actual tools. Just texts,” one Twitter user wrote.

“Midjourney can be fun, but it should never be used to cheat other artists.”

“That’s ridiculous,” another wrote to the Colorado State Fair’s tweet. “Makes your whole art competition less prestigious to the point where it’s laughable.”

“This is the literal definition of ‘pressed a few buttons to make a digital art piece’,” a user tweeted. “AI artwork is the ‘banana taped to the wall’ of the digital world now.”

Meanwhile, Jason M. Allen remains unmoved by the backlash, saying:

“I’m not going to apologize for it… I won and I didn’t break any rules.”

Despite his victory, Allen agrees with others that AI-generated art should be placed in a separate category in the future.

“I’m okay with that, there’s no problem with that,” said Allen. “But someone had to be first.”

