AI This Week: Do Shoppers Really Want Amazon GenAI?

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of the latest stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.

This week, Amazon announced Rufus, an AI-powered shopping assistant trained on the e-commerce giant's product catalog as well as information from around the web. Rufus lives inside Amazon's mobile app, helping customers find products, compare products and get recommendations on what to buy.

"From broad research at the start of a shopping journey, such as 'What should I consider when buying running shoes?,' to comparisons such as 'What are the differences between trail and road running shoes?' … Rufus meaningfully improves how easy it is for customers to find and discover the best products to meet their needs," Amazon wrote in a blog post.

That's all well and good. But my question is, who's actually yearning for this? Really?

I'm not convinced that GenAI, particularly in chatbot form, is a technology the average person cares about or even thinks about. Surveys support me in this. Last August, the Pew Research Center found that among people in the US who had heard of OpenAI's GenAI chatbot ChatGPT (18% of adults), only 26% had tried it. Use varies by age, with a higher percentage of young adults (those under 50) reporting use than older adults. But the fact remains that most people don't know about, or care to use, what's arguably the most popular GenAI product out there.

GenAI has its well-publicized problems, among them a tendency to fabricate facts, infringe on copyrights and produce biased or toxic output. Amazon's previous attempt at a GenAI chatbot, Amazon Q, struggled, revealing confidential information on its first day of release. But I'd argue that GenAI's biggest problem right now, at least from a consumer perspective, is that there are few universally compelling reasons to use it.

Certainly, a GenAI chatbot like Rufus can help with specific, narrow tasks, such as shopping by occasion (e.g. finding clothes for winter), comparing product categories (e.g. the difference between lip gloss and lip oil) and surfacing top recommendations (e.g. gifts for Valentine's Day). But does it meet the needs of most shoppers? Not according to a recent poll from ecommerce software startup Namogoo.

Namogoo, which asked hundreds of consumers about their needs and frustrations when it comes to online shopping, found that product images were the most important contributor to a good ecommerce experience, followed by product reviews and descriptions. Respondents ranked search as fourth-most important and "simple navigation" as fifth; remembering preferences, information and shopping history came in second to last.

What this means is that people by and large shop with a specific product in mind; search is an afterthought. Maybe Rufus will shake up the equation. I'm inclined to think not, particularly if the rollout is rocky (and given the reception of Amazon's other GenAI shopping experiments, it may well be), but I suppose stranger things have happened.

Here are some other noteworthy AI stories from the past few days:

  • Google Maps experiments with GenAI: Google Maps is introducing a GenAI feature to help you discover new places. Leveraging large language models (LLMs), the feature analyzes the over 250 million locations on Google Maps and draws on contributions from more than 300 million Local Guides to surface suggestions based on what you're looking for.
  • GenAI tools for music and more: In other Google news, the tech giant released GenAI tools for creating music, lyrics and images, and brought Gemini Pro, one of its more capable LLMs, to Bard users worldwide.
  • New open AI models: The Allen Institute for AI, the nonprofit AI research organization founded by the late Microsoft co-founder Paul Allen, has released several GenAI language models that it claims are more "open" than others — and, crucially, licensed in such a way that developers can use them unfettered to train, experiment with and even commercialize them.
  • FCC moves to ban AI-generated calls: The FCC is proposing that the use of voice cloning tech in robocalls be ruled fundamentally illegal, making it easier to charge the operators behind these scams.
  • Shopify Releases Image Editor: Shopify is releasing a GenAI media editor to enhance product images. Merchants can select a type from seven styles or type a prompt to generate a new background.
  • GPTs, invoked: OpenAI is encouraging adoption of GPTs, third-party apps powered by its AI models, by enabling ChatGPT users to invoke them in any chat. Paid ChatGPT users can bring GPTs into a conversation by typing "@" and selecting one from the list.
  • OpenAI Partners with Common Sense: In an unrelated announcement, OpenAI said it is teaming up with Common Sense Media, a nonprofit organization that reviews and ranks the suitability of various media and tech for kids, to collaborate on AI guidelines and educational materials for parents, educators and young people.
  • Autonomous Browsing: The Browser Company, which makes Arc Browser, is looking to build an AI that surfs the web for you and gets you results, bypassing search engines, Evan writes.

More machine learning

Does AI know what is "normal" or "typical" for a given situation, medium or utterance? In a way, large language models are uniquely suited to identifying which patterns in their datasets most resemble the others. That's effectively what Yale researchers found when they investigated whether an AI could detect the "typicality" of one item within a group of others. For example, given 100 romance novels, which does the model perceive as most, and which as least, "typical" of the genre?
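
The paper's mechanics aren't spelled out here, but one crude way to approximate the idea with off-the-shelf tools is to embed each text and rank by similarity to the group's centroid: whatever sits closest to the center is "typical," whatever sits farthest is not. A minimal sketch, assuming a generic embedding model rather than the authors' BERT variant or ChatGPT; the model choice and texts are placeholders:

```python
# Minimal sketch of "typicality" scoring with off-the-shelf embeddings:
# rank each text by cosine similarity to the group centroid. This is NOT
# the Yale researchers' method; model choice and texts are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

texts = [
    "A rugged duke falls for a spirited governess despite society's rules.",
    "Two rival bakers discover love during a small-town pie contest.",
    "A sentient spaceship files its taxes.",  # deliberately atypical
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice
emb = model.encode(texts, normalize_embeddings=True)  # one unit vector per text

centroid = emb.mean(axis=0)
centroid /= np.linalg.norm(centroid)  # re-normalize the mean vector

scores = emb @ centroid  # cosine similarity as a typicality proxy
for score, text in sorted(zip(scores, texts), reverse=True):
    print(f"{score:.3f}  {text}")
```

Under this proxy, the higher a text's similarity to the centroid, the more "typical" it is of the group; the spaceship synopsis should land at the bottom of the ranking.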

Interestingly (and disappointingly), professors Balázs Kovács and Gaël Le Mens worked for years on their own model, a BERT variant, and just as they were about to publish it, ChatGPT came out and in many ways duplicated exactly what it did. "You can cry," Le Mens said in a news release. But the good news is that both the new AI and their older, fine-tuned model suggest that this type of system can indeed identify what's typical and atypical within a dataset, a capability that could prove helpful down the line. The pair do point out that while ChatGPT supports their thesis in practice, its closed nature makes it difficult to work with scientifically.

Scientists at the University of Pennsylvania are looking at another odd concept to quantify: common sense. By asking thousands of people to rate statements such as "you get what you give" and "don't eat food past its expiration date," they measured how "commonsensical" each one was. Interestingly, although patterns emerged, there were "few beliefs recognized at the group level."

"Our findings suggest that each person's idea of common sense may be uniquely their own, making the concept less common than one might expect," said co-lead author Mark Whiting. Why is this in an AI newsletter? Because like pretty much everything else, something as "simple" as common sense, which one might expect AI to eventually have, turns out not to be simple at all! But by quantifying it this way, researchers and auditors may be able to tell how much common sense an AI has, or which groups and biases it aligns with.
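
The team's exact metric isn't described here, but the basic bookkeeping is easy to picture: tally, per statement, the fraction of raters who agree with it and the fraction who believe others would agree, and call a claim "group-level common sense" only when both are high. A toy illustration with made-up numbers, not the Penn team's actual formula:

```python
# Toy illustration of scoring "commonsensicality" from survey ratings.
# NOT the Penn team's actual metric; the statements, rating fractions
# and the 0.75 threshold are all made up for the example.
ratings = {
    "you get what you give": {"agree": 0.84, "thinks_others_agree": 0.71},
    "don't eat food past its expiration date": {"agree": 0.62, "thinks_others_agree": 0.58},
}

THRESHOLD = 0.75  # arbitrary cutoff for "group-level common sense"

for statement, r in ratings.items():
    # Combine personal agreement with perceived consensus; the geometric
    # mean penalizes claims that are lopsided on either axis.
    score = (r["agree"] * r["thinks_others_agree"]) ** 0.5
    label = "group-level" if score >= THRESHOLD else "idiosyncratic"
    print(f"{score:.2f}  {label:12s}  {statement}")
```

The same kind of scoring could be run against an AI model's answers, which is what makes the survey framing useful for audits.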

Speaking of biases, most large language models are fairly indiscriminate about the information they take in, meaning that given the right prompt, they may respond offensively, incorrectly or both. Latimer is a startup that wants to change that with a model designed to be more inclusive from the ground up.

While there aren't many details about its approach, Latimer says its model uses retrieval augmented generation (which is said to improve responses) along with uniquely licensed content and data drawn from many cultures not typically represented in these datasets. So when you ask about something, the model doesn't fall back on some 19th-century monograph for an answer. We'll know more about the model when Latimer releases more information.
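
Latimer hasn't published implementation details, but retrieval augmented generation itself is easy to sketch: embed a curated corpus, pull the passages closest to the user's question, and prepend them to the prompt so the generator grounds its answer in that material rather than its training priors. A generic illustration, not Latimer's system; the corpus, embedding model and prompt wording are all placeholders:

```python
# Generic retrieval augmented generation (RAG) sketch -- not Latimer's
# actual system. Corpus, model choice and prompt are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

corpus = [
    "Passage from a licensed, culturally representative source ...",
    "Another curated passage from an underrepresented perspective ...",
    "A third passage ...",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    q = encoder.encode([question], normalize_embeddings=True)[0]
    scores = corpus_emb @ q  # cosine similarity (vectors are unit-norm)
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str) -> str:
    """Ground the generator in retrieved passages instead of its priors."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# The assembled prompt would then be sent to whichever LLM does the generating.
print(build_prompt("What should I know about this topic?"))
```

The design point is that the retrieval corpus, not the base model's training data, becomes the primary source of answers, which is presumably how Latimer steers responses toward its licensed, more representative material.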

Image Credits: Purdue / Bedrich Benes

One thing an AI model can certainly do is grow trees. Fake trees, that is. Researchers at Purdue's Institute for Digital Forestry (where I want to work, call me) have created a super-compact model that realistically simulates a tree's growth. This is one of those problems that sounds simple but isn't; you can simulate tree growth convincingly enough if you're making a game or a movie, but what about serious scientific work? "Although AI has become seemingly pervasive, thus far it has mostly proved successful in modeling 3D geometries unrelated to nature," said lead author Bedrich Benes.

Their new model is just a megabyte, remarkably small for an AI system. But of course DNA is even smaller and denser, and it encodes the entire tree, from bud to bud. The model still works in abstraction (it's not an exact simulation of nature), but it shows that the complexities of tree growth can be encoded in a relatively simple model.

Finally, a robot from University of Cambridge researchers can read braille faster than a human, with 90% accuracy. Why, you ask? It isn't actually meant for blind users; the team decided that reading braille would be an interesting and easily quantifiable way to test the sensitivity and speed of robotic fingertips. If a robot can read braille just by zooming over it, that's a good sign! You can read more about this interesting approach here. Or watch the video below:


