We’re in the middle of a proverbial AI arms race between three chatbot services backed by three tech giants: Meta’s Meta AI, OpenAI’s ChatGPT, and Google’s Gemini. Since ChatGPT opened the floodgates for generative AI and all its potential applications, the competition between these services has reached a fever pitch.
It’s fascinating to see how much growth has occurred in such a short period, and to ruminate on how these chatbots will continue to evolve. For now, let’s evaluate their current progress by measuring each across several categories drawn from everyday use. We’re looking for help with things like emails, math, recipes, programming, and more.
From there, we’ll see which AI chatbot offers the most thorough and accurate answers while citing sources where applicable. For the purposes of this article, we’re using ChatGPT running the GPT-4 model.
Meta AI vs ChatGPT vs Google Gemini: emails
Many professionals have started using AI to help with more menial work tasks, so I began by asking all three AI chatbots to ‘write me an email for work asking for a project extension.’
Each chatbot generated a well-written email that not only carried out the main objective of the prompt but did so in a polite and professional manner. All three were also structured as templates, meaning I could personalize the email with more relevant information.
In the case of email writing, Meta AI, ChatGPT, and Google Gemini all get perfect marks. Of course, this was the easiest prompt to carry out; we’ll get to the real challenges later.
Meta AI vs ChatGPT vs Google Gemini: recipes
For this prompt, I asked the chatbots to “Give me a recipe for chili,” and each one gave me both accurate and thorough recipes (with slight variations), which I determined by comparing them to my knowledge of making chili.
However, there was one major difference between the chatbots, and that involved sourcing the recipe. Both Meta AI and Gemini sourced the recipe at the bottom and even linked to the website used, with the latter even going the extra mile and linking to additional recipes at the bottom.
However, ChatGPT did not cite a source at all – it simply produced the entire recipe with no attribution. Was it lifted wholesale from an unknown site? Was it made up? Either way, this could be dangerous: AI is far from perfect, and ChatGPT could make a mistake in the cooking instructions, putting novice cooks at risk, especially since there’s no way to double-check.
In this regard, I’d use Gemini or Meta AI for recipes, as you can trace back the recipe and verify there was a human involved, making it more trustworthy regarding food safety.
Meta AI vs ChatGPT vs Google Gemini: summarize news
I asked each chatbot to ‘Give me a bulleted list of the latest news for [insert current date here]’, and each one was able to do so rather quickly. However, all three copied headlines with little context about the stories themselves. Once again, the difference between the AI chatbots lay in how they sourced news, if at all.
Both ChatGPT and Meta AI directly linked to the news outlets they cited, with the former linking to several sources after each headline it quoted. Meanwhile, Gemini named various news sites as places to get pertinent news but didn’t link to the pages it sourced.
ChatGPT and Meta AI seem to be the best AI chatbots for news, as they actually link to their source rather than lift from an unknown website wholesale without proper citation.
Meta AI vs ChatGPT vs Google Gemini: math
I asked the three chatbots two sets of math problems: one algebra and the other geometry.
‘Determine all possible values of the expression A³ + B³ + C³ − 3ABC, where A, B, and C are nonnegative integers’
and
‘In triangle ∆ABC, let G be the centroid, and let I be the center of the inscribed circle. Let α and β be the angles at the vertices A and B, respectively. Suppose that the segment IG is parallel to AB and that β = 2 tan^-1 (1/3). Find α.’
In the case of the first question, the three chatbots used three separate methods to solve the problem but all arrived at the same conclusion.
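For readers who want to check the chatbots’ shared conclusion, this is a well-known competition problem with a clean closed-form answer. The derivation below is my own sketch, not any chatbot’s output:

```latex
A^3 + B^3 + C^3 - 3ABC = (A + B + C)\,(A^2 + B^2 + C^2 - AB - BC - CA).
```

Since $A^3 \equiv A \pmod 3$, the expression is congruent to $A + B + C \pmod 3$; and whenever $3 \mid A + B + C$, the second factor is also divisible by 3, so any multiple of 3 the expression takes is in fact a multiple of 9. Conversely, $(A, B, C) = (k+1, k, k)$ yields $3k+1$, $(k+1, k+1, k)$ yields $3k+2$, and $(k+1, k, k-1)$ yields $9k$. The possible values are therefore exactly the nonnegative integers not congruent to 3 or 6 modulo 9.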
The second question seemed to really trip up the chatbots. ChatGPT at first worked through the problem well and nearly had the answer, but then never actually posted the final result. Gemini also worked through the problem but didn’t insert any numeric values into its equations, so it arrived at a theoretical answer that’s helpful for understanding general principles but doesn’t answer the question. Only Meta AI properly answered the problem, giving us a concrete final answer.
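The geometry problem can also be solved by hand in a few lines, so you can compare the correct answer against each chatbot’s output. This derivation is mine, not a chatbot’s:

```latex
IG \parallel AB \;\Rightarrow\; r = \tfrac{1}{3} h_c
\quad \text{(incenter and centroid equidistant from } AB\text{)}.
```

With area $T$ and semiperimeter $s$, we have $T = rs = \tfrac{1}{2} c\, h_c$, so $r = h_c/3$ forces $3c = 2s$, i.e. $a + b = 2c$ and $s = \tfrac{3c}{2}$. Using the standard identity $\tan\tfrac{\alpha}{2}\tan\tfrac{\beta}{2} = \tfrac{s-c}{s} = \tfrac{1}{3}$ together with $\tan\tfrac{\beta}{2} = \tfrac{1}{3}$ (since $\beta = 2\tan^{-1}\tfrac{1}{3}$), we get $\tan\tfrac{\alpha}{2} = 1$, hence $\alpha = 90^\circ$.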
If you’re looking for a chatbot that can solve math problems for you, Meta AI is the best option.
Meta AI vs ChatGPT vs Google Gemini: programming
I gave each AI chatbot the following programming prompt, based on the one used in this excellent piece, which asked an older ChatGPT model to do the same:
‘I want to create a variant on the game tic-tac-toe, but I need it to be more complex. So, the grid should be 12-by-12. It should still use “x” and “o”. Rules include that any player can block another by placing their “x” or “o” in any space around the grid, as long as it is in one of the spaces right next to the other player. They can choose to place their “X” or “o” in any space, as well, to block future moves. The goal is to be the first one to have at least six “x” or “o” in any row, column, or diagonal before the other player. Remember, one player is “x” and the other is “o”. Please program this in simple HTML and JavaScript. Let’s call this game: Tic-Tac-Go.’
For this to be considered a success, each chatbot simply needed to provide me with complete code in both HTML and JavaScript.
Meta AI and ChatGPT gave me exactly what I requested in both languages. Gemini gave me code in JavaScript but then decided to substitute CSS for HTML, which, according to Testbook, are not interchangeable: “HTML provides the structure and content of a web page, while CSS provides the visual design.”
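To give a sense of what a correct response has to contain, here’s a minimal sketch of the one genuinely tricky piece of Tic-Tac-Go’s logic: detecting six matching marks in any row, column, or diagonal on the 12-by-12 board. This is my own illustration, not any chatbot’s actual output, and the function and variable names are my own choices:

```javascript
// Tic-Tac-Go win check: first player with six of their marks in a
// row, column, or diagonal wins. The board is a flat array of
// SIZE * SIZE strings: "x", "o", or "" for an empty cell.
const SIZE = 12;
const TARGET = 6;

function checkWin(board, player) {
  // Scan in four directions: right, down, down-right, down-left.
  const dirs = [[0, 1], [1, 0], [1, 1], [1, -1]];
  for (let r = 0; r < SIZE; r++) {
    for (let c = 0; c < SIZE; c++) {
      for (const [dr, dc] of dirs) {
        let count = 0;
        for (let k = 0; k < TARGET; k++) {
          const rr = r + dr * k;
          const cc = c + dc * k;
          // Stop at the board edge or at a cell not owned by this player.
          if (rr < 0 || rr >= SIZE || cc < 0 || cc >= SIZE) break;
          if (board[rr * SIZE + cc] !== player) break;
          count++;
        }
        if (count === TARGET) return true;
      }
    }
  }
  return false;
}
```

The rest of the game (rendering the grid in HTML, handling clicks, and enforcing the blocking rule from the prompt) wraps around a check like this one.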
If you’re looking for an AI chatbot that creates solid programming code, then Meta AI and ChatGPT are the go-tos.
Meta AI vs ChatGPT vs Google Gemini: mock interview
The last prompt I tested was: ‘do a mock interview for a role as a computing staff writer at a major online tech publication.’ For this one, each of the chatbots simulated a possible interview between me and an interviewer, complete with mock questions and answers.
All three approached the mock interview differently, but all ended up with great results. You would still have to create a more detailed scenario to role-play with the bot, but these all work well as starting points for understanding how to approach an interview and what might be asked.
Meta AI vs ChatGPT vs Google Gemini: verdict
After tallying up the results, it seems that Meta AI is the best AI chatbot overall. Out of the three, Meta AI has the most consistent results over a wide variety of prompts, making it far more reliable than its competition.
ChatGPT is in the middle, as it also returned decently consistent results. To see how it compared to the older 3.5 model, I ran the same questions through that version and saw a massive improvement between the two. OpenAI is clearly improving its LLM with each update.
Unfortunately, Google’s Gemini came in dead last and seems to be the least consistent AI chatbot of the bunch. That tracks with its very rough start back when it was still called Google Bard, and it’s been playing catch-up with the competition ever since.