
Gemini models are coming to GitHub Copilot

Today, we’re announcing that GitHub will make Gemini models – starting with Gemini 1.5 Pro – available to developers on its platform for the first time through a new partnership with Google Cloud. Developers value flexibility and control in choosing the model best suited to their needs — and this partnership shows that the next phase of AI code generation will be defined not only by multi-model functionality, but also by multi-model choice.

In the coming weeks, developers using GitHub Copilot will be able to use Gemini 1.5 Pro, which excels in common developer use cases such as code generation, analysis, and optimization. Gemini 1.5 Pro is natively multimodal and features a long context window of up to two million tokens — the longest of any large-scale foundation model — so developers can process more than 100,000 lines of code, suggest helpful modifications, and explain how different parts of the code work.

In the coming weeks, developers will be able to select Gemini 1.5 Pro from GitHub Copilot’s new model picker for coding-related use cases during conversations with GitHub Copilot Chat on github.com, in Visual Studio Code, and with Copilot extensions for Visual Studio.

Gemini models support developer experiences across many of the most popular platforms and environments today — via the Gemini API, Google AI Studio, and Vertex AI, or through assistance directly in Google Cloud, Workspace, Android Studio, Firebase, and Colab. In addition, Google’s own code-assistance tool, Gemini Code Assist, helps developers complete code as they write across popular integrated development environments (IDEs) like Visual Studio Code and JetBrains IDEs such as IntelliJ, PyCharm, GoLand, and WebStorm.
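For readers curious what calling the Gemini API directly looks like, here is a minimal sketch that builds the JSON request body for a `generateContent` call. The model name, prompt, and helper function are illustrative assumptions; actually sending the request requires an API key and an HTTP client, which are omitted here.

```python
import json

# Sketch: request body for POST .../v1beta/models/{model}:generateContent
# (hypothetical helper; prompt and model name are illustrative)
def build_generate_request(prompt: str, model: str = "gemini-1.5-pro") -> dict:
    """Return the JSON payload the Gemini API expects for text generation."""
    return {
        "contents": [
            {"role": "user", "parts": [{"text": prompt}]}
        ]
    }

payload = build_generate_request(
    "Explain what this function does: def double(x): return x * 2"
)
print(json.dumps(payload, indent=2))
```

The same payload shape is used whether you call the Gemini API directly or go through Vertex AI, which is part of what makes the model portable across the environments listed above.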

Read more about our new partnership with GitHub here.
