Let's Experiment with DALL-E 3 and Llama 3!

Jonathan Mast from White Beard Strategies shared an update with me this morning about recent developments at Meta AI, which prompted me to rerun an experiment I originally conducted in January 2024. 🤖
At that time, I ran the same prompt across three platforms: OpenAI and Bing, both of which use DALL-E 3 technology, and Meta AI, now powered by Llama 3 technology. 🐝 🐝 🐝

In January, OpenAI delivered precisely what I requested: three bees in a jar, with no additional elements. Bing's results included three bees outside the jar rather than inside it, along with items related to bee activity, like honey 🍯 and oranges 🍊, which I had mentioned as desirable background elements. Meta AI's response, however, featured more than three bees on or in the jar, but this time they were proportionately depicted, which was an improvement.
The background settings were also notable. Previously, Meta AI chose a simple backdrop, but today it selected a complex office environment reminiscent of one Einstein might have preferred, complete with a light bulb, likely because bees are positively phototactic, meaning they are attracted to light. 💡

One consistent detail across all the results this time was the absence of brand names on the jars, such as “Ball,” which had appeared previously.

Reflecting on these responses, it's fascinating to consider the implications of these AI tools as they evolve. Why do you think each AI interpreted the prompt so differently? What does this tell us about the way artificial intelligence understands context and creativity? Join the discussion below and share your thoughts or any experiments you’ve conducted. Who knows—your insights might just shape the next experiment! 🤓


#generativeai #dalle3 #creative #digitalmarketing #openai #bing #metaai

Original from January 2024

Updated image, same prompt — April 2024