Generative AI in Jupyter
Jupyter AI brings generative artificial intelligence to Jupyter notebooks, giving users the power to explain and generate code, fix errors, summarize content, ask questions about their local files, and generate entire notebooks from a natural language prompt. Using its powerful magic commands and chat interface, Jupyter AI connects Jupyter with large language models (LLMs) from providers such as AI21, Anthropic, AWS, Cohere, and OpenAI. We use LangChain to support all popular LLMs and providers, giving you access to new models as they are released. LangChain will also let Jupyter AI use local models. Jupyter AI version 1.0, for JupyterLab 3, and Jupyter AI 2.0, for JupyterLab 4, are now available as free and open source software.
Jupyter AI is designed with responsible AI and data privacy in mind. You can pick which LLM and embedding model best suit your needs. The underlying prompts, chains, and other software are open source, so you can see exactly how your data is being used. Jupyter AI saves metadata about model-generated content in each AI-generated code cell, so you and your collaborators can track where AI-generated code enters your workflow. Finally, Jupyter AI only contacts an LLM when you directly ask it to; it does not read your data or transmit it to models without your explicit consent.
Jupyter AI is an official subproject of Project Jupyter and is available now as free, open source software. We designed it based on the same principles that underlie all of Project Jupyter: we made it simple, easy to use, modular, and extensible, and we prioritized ethical considerations and social responsibility. We’d love to hear from you about how we can improve it!
Getting started with Jupyter AI
Start using Jupyter AI by installing the appropriate version with pip:
pip install 'jupyter-ai>=1.0,<2.0' # If you use JupyterLab 3
pip install jupyter-ai # If you use JupyterLab 4
Then, launch JupyterLab. Jupyter AI provides two different interfaces for interacting with LLMs. In JupyterLab, you can use the chat UI to converse with an AI assistant that can help with your code. In any supported notebook or IPython environment, including JupyterLab, Notebook, IPython, Colab, and Visual Studio Code, you can invoke LLMs using the %%ai magic command. Jupyter AI can turn any Jupyter Notebook session into a generative AI playground with support for text and image models.
Project Jupyter is vendor-neutral, so Jupyter AI supports LLMs from AI21, Anthropic, AWS, Cohere, HuggingFace Hub, and OpenAI. More model providers will be added in the future. Please review a provider’s privacy policy and pricing model before you use it. We’re also working on support for locally deployed models, for maximum privacy. Once you have installed Jupyter AI, you will need to authenticate to each model provider that you wish to use before you can use the magic commands. For most providers, this involves setting an environment variable. The user documentation has detailed instructions for configuring model providers.
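For example, if you plan to use OpenAI models, you can set your API key in the environment before launching JupyterLab. (OPENAI_API_KEY is the variable OpenAI uses; other providers use their own variable names, listed in the documentation.)

export OPENAI_API_KEY=<your-api-key>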
The chat interface has its own configuration panel for choosing a language model and an embedding model, and for authenticating to each model’s provider. A language model responds to users’ messages in the chat panel. When you ask the chat interface to learn about local files, it uses an embedding model to parse these files and to assist when you ask questions about them.
You can find full details about how to configure and use Jupyter AI in the user documentation.
The chat interface, your AI assistant
The chat interface puts you in conversation with Jupyternaut, a conversational agent using a language model of your choice.
Jupyternaut communicates primarily through text, and it can also interact with files in JupyterLab. It can answer questions as a general-purpose AI assistant, include selections from your notebooks with your questions, insert AI-generated output into your notebooks, learn from your local files and answer questions about them, and generate notebooks from a prompt. Jupyternaut can only see the information that you send it through chat messages and commands; it only reads your data when you specifically ask it to.
To get started, you can ask Jupyternaut a question:
You can also highlight part of your notebook and include it with your prompt.
Using prompts that include the selected code, you can ask Jupyternaut to explain your code in plain English (or in any other language it can speak), make modifications to it, and identify errors in it. If you want, Jupyternaut can even replace your selection with its response. Please review AI-generated code before you run it, as you would review code written by another person.
For example, you can ask Jupyternaut to rewrite code by adding comments to it:
Jupyternaut sends the code to your chosen language model, then replaces the selection with the language model’s response.
Generating a notebook from a text prompt
Jupyter AI’s chat interface can generate an entire notebook from a text prompt. To do this, run the /generate command and provide a text description. Jupyternaut will use its AI language model to name the notebook and fill it with markdown and code cells. This may take a few minutes. While Jupyternaut is working, you can continue to use the chat UI, and Jupyternaut will continue generating your notebook.
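For example, you might send a prompt like this in the chat (the topic here is just an illustration; describe whatever notebook you want):

/generate A notebook that demonstrates how to load a CSV file with pandas and plot a histogram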
Once Jupyternaut has finished generating your notebook, it will send you a message with its filename, so that you can open it. Please review any AI-generated code before you run it.
Learning from and asking about local files
You can use the /learn command to teach Jupyternaut about local files, so that you can use the /ask command to ask questions about them. For example, using the /learn command, you can teach Jupyternaut about Jupyter AI’s documentation:
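In the chat, that looks something like this (the path is an example; point /learn at whichever directory contains the files you want indexed):

/learn docs/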
When you learn local files, Jupyternaut uses an embedding model to convert the data into embeddings, then stores the output in a local vector database. Please review the privacy policy for each model, and be aware of any restrictions on sharing your local data with third-party model providers. Once the learning process is complete, you can ask a question with the /ask command. Using retrieval-augmented generation (RAG), Jupyternaut will append relevant information from its vector database to your question, then use the AI language model you selected to answer it.
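For example, after learning the documentation above, you could ask a question like this (the question itself is just an illustration):

/ask How do I configure a model provider?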
Notebooks as generative AI playgrounds with magic commands
Jupyter AI also provides magic commands that you can run in notebook cells and in the IPython command-line interface. To get started, run %load_ext jupyter_ai_magics, which will load the magics extension. You can then use Jupyter AI with the %%ai magic command. You can run %ai help to learn about all the options and commands you can run using the %ai line magic and %%ai cell magic commands.
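For example, you can run these two commands in a notebook cell to load the extension and list the available options:

%load_ext jupyter_ai_magics
%ai help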
Each %%ai command requires a model, typically specified as provider-id:model-id. To use a particular provider, you’ll need to set its API key using an appropriate environment variable or Python module. See the model providers section of the user documentation for specific information. Once you’ve provided the key to your model provider, you can run a magic command by specifying the model on the first line and your prompt on subsequent lines.
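For example, with an OpenAI API key configured, a cell might look like the following (the model ID is illustrative; use whichever provider and model you have set up):

%%ai openai-chat:gpt-3.5-turbo
Write a Python function that returns the first n prime numbers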
You can use the -f or --format parameter to customize the format of the output, including HTML, math, source code, and images.
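For example, asking for output formatted as source code might look like this (again, the model ID is illustrative):

%%ai openai-chat:gpt-3.5-turbo -f code
A function that computes the lowest common multiple of two integers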
You can interpolate a variable name or expression in a prompt by enclosing it in braces (curly brackets).
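For instance, if an earlier cell defined a variable poet = "Walt Whitman", you could write (the variable and model are illustrative):

%%ai openai-chat:gpt-3.5-turbo
Write a poem in the style of {poet}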
Interpolation also works with the special In and Out variables, which contain the inputs and outputs of code cells. Note that a cell output can contain both text and markdown values.
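For example, to ask about the code you ran in the cell numbered 3 (the cell number and model ID are illustrative):

%%ai openai-chat:gpt-3.5-turbo
Explain the following code: {In[3]}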
Jupyter AI adds a special Err variable, which stores the errors that occur while executing code. By interpolating this variable into a prompt, you can use an AI language model to explain and correct an error in your code.
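For example, if the cell numbered 5 raised an error, you could ask (the cell number and model ID are illustrative):

%%ai openai-chat:gpt-3.5-turbo
Explain the following error and suggest a fix: {Err[5]}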
About the developers
Jupyter AI is an officially supported Jupyter subproject. The following Jupyter contributors built Jupyter AI.
How can you help?
We’re just getting started with generative AI in Jupyter. Please join us! We have a list of issues that could use your help, such as adding support for locally hosted LLMs.
- Install and use the Jupyter AI extension. If you find any bugs or have suggestions, please create issues on GitHub.
- Join the discussion about generative AI in Jupyter in the “Generative AI in Jupyter” topic on Discourse.
- Contribute: Your bug reports, feature requests, and pull requests will help improve this project for everyone.
Note: An earlier version of this story said that users can choose which vector database to use with Jupyter AI. As of Jupyter AI 2.1.0, only FAISS is available; users cannot choose another vector database. Sorry for the error.