Summary:

Google’s Gemini family of AI models has been making waves since its debut, and the latest update, Gemini 1.5, raises the bar yet again. With two new members, Gemini 1.5 Pro and Gemini 1.5 Flash, Google is tackling the limitations of current AI models and pushing into uncharted territory: multimodal understanding across millions of tokens of context.

Unprecedented Context: Millions of Tokens

Imagine an AI model that can process almost five days of audio, read the entire “War and Peace” novel, or comprehend 10.5 hours of video in a single request. That’s the power of Gemini 1.5 Pro, a model that can handle up to 10 million context tokens, a leap beyond the current industry standard.

To put it simply, a “token” is the basic unit of data an AI model processes: roughly a word or word fragment in text, or a frame in a video. Traditionally, AI models could attend to only a limited number of these tokens at once.
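As a rough illustration, here is a back-of-the-envelope token estimate using the common heuristic of about four characters per English token (an assumption for illustration only; real tokenizers, and Gemini's in particular, will produce different counts):

```python
# Rough token estimate for long documents.
# Assumes the common ~4 characters-per-token heuristic for English text;
# this is NOT Gemini's actual tokenizer, just an order-of-magnitude sketch.

CHARS_PER_TOKEN = 4  # heuristic assumption

def estimate_tokens(char_count: int) -> int:
    """Estimate token count from a raw character count."""
    return char_count // CHARS_PER_TOKEN

# "War and Peace" is roughly 3.2 million characters of English text.
war_and_peace_tokens = estimate_tokens(3_200_000)
print(war_and_peace_tokens)            # roughly 800,000 tokens

# A 10-million-token window would fit the whole novel many times over.
print(10_000_000 // war_and_peace_tokens)  # ~12 copies
```

Even under this crude estimate, a single 10-million-token request dwarfs the entire novel, which is what makes whole-book question answering possible in one shot.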

This ability to process vast amounts of information across modalities (text, audio, video, code) unlocks a whole new realm of possibilities.

Gemini 1.5 Pro: A Refined Powerhouse

Comparison with the February release of the Gemini family

Gemini 1.5 Pro Core Capabilities Table

Since its initial release in February, Gemini 1.5 Pro has undergone significant pre- and post-training refinement. The result? More than a 10% improvement across various capabilities:

 

  • Reasoning: Higher scores on challenging reasoning benchmarks like MATH and GPQA.
  • Multimodal Understanding: Remarkable improvements in understanding images and videos, with new state-of-the-art results on several benchmarks like MathVista, InfographicVQA, and EgoSchema.
  • Efficiency: Despite these performance gains, Gemini 1.5 Pro requires significantly less training compute than its predecessor, Gemini 1.0 Ultra, and is more efficient to use.

Gemini 1.5 Flash: Speed and Efficiency Meet Capability

While Gemini 1.5 Pro is the powerhouse, Gemini 1.5 Flash is designed for speed and efficiency. It boasts the same 2-million-plus token context window and multimodal capabilities as Pro, but with lower latency, making it well suited to real-time applications and large-scale deployments. This opens the door to use cases that were previously infeasible, such as large-scale data labelling and high-throughput agent serving.

Outperforming the Competition

Gemini 1.5 vs. GPT-4 Turbo text haystack

Gemini 1.5 Pro and Flash consistently outperform competing models across a variety of tasks and modalities, including those from OpenAI and Anthropic:

 

  • Long-Context Retrieval: Gemini 1.5 Pro maintains near-perfect recall in “needle-in-a-haystack” tasks across all modalities (text, audio, video) up to 10 million tokens, leaving models like Claude 2.1 and GPT-4 Turbo far behind.
  • Language Learning: Gemini 1.5 Pro can learn to translate from English to Kalamang, a low-resource language with fewer than 200 speakers, solely from a grammar manual provided in context. This showcases its exceptional in-context learning abilities, surpassing other models that rely heavily on pre-training data.
  • Latency: Gemini 1.5 Flash achieves the fastest output generation across multiple languages, outpacing GPT-4 Turbo and Claude 3 models. It even generates text over 30% faster than Claude 3 Haiku for English queries.
Character output time differences across APIs
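The “needle-in-a-haystack” retrieval test described above can be sketched in a few lines. This is a minimal mock of the evaluation harness, not Google's actual benchmark code; the commented-out `query_model` call is a hypothetical stand-in for a real model API:

```python
def build_haystack(needle: str, filler: str, n_segments: int, position: float) -> str:
    """Insert a 'needle' sentence at a relative depth within repeated filler text."""
    segments = [filler] * n_segments
    idx = int(position * n_segments)
    segments.insert(idx, needle)
    return " ".join(segments)

def recall_at(model_answer: str, secret: str) -> bool:
    """Score one retrieval: did the model's answer reproduce the secret?"""
    return secret in model_answer

# Sweep the needle's depth through the context to chart recall vs. position.
secret = "7341"
needle = f"The magic number is {secret}."
filler = "The grass is green and the sky is blue."

for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    prompt = build_haystack(needle, filler, n_segments=1000, position=depth)
    # answer = query_model(prompt + " What is the magic number?")  # hypothetical API call
    answer = secret  # placeholder so this sketch runs end-to-end
    print(depth, recall_at(answer, secret))
```

In the real benchmark, the same idea is repeated across context lengths (up to millions of tokens) and modalities, and a model's recall is plotted as a heatmap of context length versus needle depth.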

Beyond Benchmarks: Real-World Impact

Gemini 1.5’s long context capabilities are not just theoretical. Google has demonstrated real-world applications:

  • Long-Document QA: Answering complex questions that require understanding relationships between pieces of information spanning an entire book.
  • Long-Context ASR: Transcribing 15-minute videos more accurately than specialised models like Whisper and USM.
  • In-Context Planning: Excelling in planning tasks with fewer examples and longer context, paving the way for more intelligent agents in robotics and other domains.
  • Unstructured Data Analytics: Extracting information from images and organising it into a structured data sheet with higher accuracy than GPT-4 Turbo.
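For the ASR comparison above, transcription accuracy is conventionally measured by word error rate (WER): the word-level edit distance between the model's transcript and a reference, divided by the reference length. A minimal implementation, for illustration:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion: 1/6 ≈ 0.167
```

Lower is better: a WER of 0.0 means a perfect transcript, and long-context ASR claims like the one above amount to Gemini 1.5 achieving a lower WER than Whisper or USM on the same audio.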

Gemini 1.5 marks a significant milestone in the evolution of AI. However, evaluating these powerful long-context models requires new benchmarks and methodologies that go beyond traditional, manually curated evaluations. Google urges researchers and practitioners to develop more challenging and comprehensive evaluation methods to unlock the full potential of this new generation of AI models.

The Bottom Line

Google’s Gemini 1.5 is not just an incremental update; it’s a true paradigm shift. With its unprecedented context window, multimodal understanding, and impressive efficiency gains, Gemini 1.5 is poised to revolutionise how we interact with information and leverage AI for real-world applications. The race for long-context AI is on, and Google is clearly leading the pack.

Change is afoot, and staying ahead means embracing advanced technology. Gemini 1.5 offers a range of possibilities for your business. Whether it’s enhancing data processing, improving customer interactions, or uncovering new insights through AI, now is a great time to explore how Gemini 1.5 can benefit your operations. Take a closer look at what Gemini can do and see how it might fit into your future plans.

Evan Febrianto

Senior ML Engineer

Embrace the power of secure cloud and AI solutions with Tridorian. Reach out to learn how we can make a difference.
