{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kccfk0mHZ4Ad"
      },
      "source": [
        "# Migrating to LiteLLM Proxy from OpenAI/Azure OpenAI\n",
        "\n",
        "Covers:\n",
        "\n",
        "*   /chat/completion\n",
        "*   /embedding\n",
        "\n",
        "\n",
        "These are **selected examples**. LiteLLM Proxy is **OpenAI-Compatible**, it works with any project that calls OpenAI. Just change the `base_url`, `api_key` and `model`.\n",
        "\n",
        "For more examples, [go here](https://docs.litellm.ai/docs/proxy/user_keys)\n",
        "\n",
        "To pass provider-specific args, [go here](https://docs.litellm.ai/docs/completion/provider_specific_params#proxy-usage)\n",
        "\n",
        "To drop unsupported params (E.g. frequency_penalty for bedrock with librechat), [go here](https://docs.litellm.ai/docs/completion/drop_params#openai-proxy-usage)\n"
      ]
    },
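    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick illustration of the point above, here is a minimal sketch (assuming a LiteLLM Proxy running locally on port 4000) of the only change needed in existing OpenAI SDK code:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import openai\n",
        "\n",
        "# Before: the client talks directly to OpenAI\n",
        "# client = openai.OpenAI(api_key=\"sk-...\")\n",
        "\n",
        "# After: point the same client at the LiteLLM Proxy (assumed to be at http://0.0.0.0:4000)\n",
        "client = openai.OpenAI(\n",
        "    api_key=\"sk-1234\",              # proxy virtual key, or any string if no key is set\n",
        "    base_url=\"http://0.0.0.0:4000\"  # LiteLLM Proxy endpoint\n",
        ")\n",
        "\n",
        "# the rest of the code (chat.completions.create, embeddings.create, ...) stays unchanged"
      ]
    },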
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nmSClzCPaGH6"
      },
      "source": [
        "## /chat/completion\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_vqcjwOVaKpO"
      },
      "source": [
        "### OpenAI Python SDK"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "x1e_Ok3KZzeP"
      },
      "outputs": [],
      "source": [
        "import openai\n",
        "client = openai.OpenAI(\n",
        "    api_key=\"anything\",\n",
        "    base_url=\"http://0.0.0.0:4000\"\n",
        ")\n",
        "\n",
        "# request sent to model set on litellm proxy, `litellm --model`\n",
        "response = client.chat.completions.create(\n",
        "    model=\"gpt-3.5-turbo\",\n",
        "    messages = [\n",
        "        {\n",
        "            \"role\": \"user\",\n",
        "            \"content\": \"this is a test request, write a short poem\"\n",
        "        }\n",
        "    ],\n",
        "    extra_body={ # pass in any provider-specific param, if not supported by openai, https://docs.litellm.ai/docs/completion/input#provider-specific-params\n",
        "        \"metadata\": { # 👈 use for logging additional params (e.g. to langfuse)\n",
        "            \"generation_name\": \"ishaan-generation-openai-client\",\n",
        "            \"generation_id\": \"openai-client-gen-id22\",\n",
        "            \"trace_id\": \"openai-client-trace-id22\",\n",
        "            \"trace_user_id\": \"openai-client-user-id2\"\n",
        "        }\n",
        "    }\n",
        ")\n",
        "\n",
        "print(response)"
      ]
    },
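    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As the `extra_body` comment above notes, params the OpenAI SDK does not define can be forwarded to the underlying provider through the proxy. A minimal sketch, assuming the proxy exposes an Anthropic model under the hypothetical name `claude-3-sonnet` (see the provider-specific params docs linked above for the authoritative usage):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import openai\n",
        "\n",
        "client = openai.OpenAI(\n",
        "    api_key=\"anything\",\n",
        "    base_url=\"http://0.0.0.0:4000\"\n",
        ")\n",
        "\n",
        "# sketch: forward a provider-specific param (Anthropic's top_k) via extra_body\n",
        "response = client.chat.completions.create(\n",
        "    model=\"claude-3-sonnet\",  # hypothetical model_name configured on the proxy\n",
        "    messages=[{\"role\": \"user\", \"content\": \"this is a test request, write a short poem\"}],\n",
        "    extra_body={\n",
        "        \"top_k\": 10  # not an OpenAI param; passed through to the provider by the proxy\n",
        "    }\n",
        ")\n",
        "\n",
        "print(response)"
      ]
    },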
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AqkyKk9Scxgj"
      },
      "source": [
        "## Function Calling"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wDg10VqLczE1"
      },
      "outputs": [],
      "source": [
        "from openai import OpenAI\n",
        "client = OpenAI(\n",
        "    api_key=\"sk-1234\", # [OPTIONAL] set if you set one on proxy, else set \"\"\n",
        "    base_url=\"http://0.0.0.0:4000\",\n",
        ")\n",
        "\n",
        "tools = [\n",
        "  {\n",
        "    \"type\": \"function\",\n",
        "    \"function\": {\n",
        "      \"name\": \"get_current_weather\",\n",
        "      \"description\": \"Get the current weather in a given location\",\n",
        "      \"parameters\": {\n",
        "        \"type\": \"object\",\n",
        "        \"properties\": {\n",
        "          \"location\": {\n",
        "            \"type\": \"string\",\n",
        "            \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
        "          },\n",
        "          \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
        "        },\n",
        "        \"required\": [\"location\"],\n",
        "      },\n",
        "    }\n",
        "  }\n",
        "]\n",
        "messages = [{\"role\": \"user\", \"content\": \"What's the weather like in Boston today?\"}]\n",
        "completion = client.chat.completions.create(\n",
        "  model=\"gpt-4o\", # use 'model_name' from config.yaml\n",
        "  messages=messages,\n",
        "  tools=tools,\n",
        "  tool_choice=\"auto\"\n",
        ")\n",
        "\n",
        "print(completion)\n"
      ]
    },
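    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The proxy returns a standard OpenAI-format response, so tool calls are read exactly as when talking to OpenAI directly. A minimal sketch of inspecting the result from the cell above:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "# uses `completion` from the previous cell\n",
        "tool_calls = completion.choices[0].message.tool_calls\n",
        "if tool_calls:\n",
        "    for tool_call in tool_calls:\n",
        "        print(\"function:\", tool_call.function.name)\n",
        "        print(\"arguments:\", json.loads(tool_call.function.arguments))\n",
        "else:\n",
        "    print(completion.choices[0].message.content)"
      ]
    },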
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YYoxLloSaNWW"
      },
      "source": [
        "### Azure OpenAI Python SDK"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "yA1XcgowaSRy"
      },
      "outputs": [],
      "source": [
        "import openai\n",
        "client = openai.AzureOpenAI(\n",
        "    api_key=\"anything\",\n",
        "    base_url=\"http://0.0.0.0:4000\"\n",
        ")\n",
        "\n",
        "# request sent to model set on litellm proxy, `litellm --model`\n",
        "response = client.chat.completions.create(\n",
        "    model=\"gpt-3.5-turbo\",\n",
        "    messages = [\n",
        "        {\n",
        "            \"role\": \"user\",\n",
        "            \"content\": \"this is a test request, write a short poem\"\n",
        "        }\n",
        "    ],\n",
        "    extra_body={ # pass in any provider-specific param, if not supported by openai, https://docs.litellm.ai/docs/completion/input#provider-specific-params\n",
        "        \"metadata\": { # 👈 use for logging additional params (e.g. to langfuse)\n",
        "            \"generation_name\": \"ishaan-generation-openai-client\",\n",
        "            \"generation_id\": \"openai-client-gen-id22\",\n",
        "            \"trace_id\": \"openai-client-trace-id22\",\n",
        "            \"trace_user_id\": \"openai-client-user-id2\"\n",
        "        }\n",
        "    }\n",
        ")\n",
        "\n",
        "print(response)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yl9qhDvnaTpL"
      },
      "source": [
        "### Langchain Python"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5MUZgSquaW5t"
      },
      "outputs": [],
      "source": [
        "from langchain.chat_models import ChatOpenAI\n",
        "from langchain.prompts.chat import (\n",
        "    ChatPromptTemplate,\n",
        "    HumanMessagePromptTemplate,\n",
        "    SystemMessagePromptTemplate,\n",
        ")\n",
        "from langchain.schema import HumanMessage, SystemMessage\n",
        "import os\n",
        "\n",
        "os.environ[\"OPENAI_API_KEY\"] = \"anything\"\n",
        "\n",
        "chat = ChatOpenAI(\n",
        "    openai_api_base=\"http://0.0.0.0:4000\",\n",
        "    model = \"gpt-3.5-turbo\",\n",
        "    temperature=0.1,\n",
        "    extra_body={\n",
        "        \"metadata\": {\n",
        "            \"generation_name\": \"ishaan-generation-langchain-client\",\n",
        "            \"generation_id\": \"langchain-client-gen-id22\",\n",
        "            \"trace_id\": \"langchain-client-trace-id22\",\n",
        "            \"trace_user_id\": \"langchain-client-user-id2\"\n",
        "        }\n",
        "    }\n",
        ")\n",
        "\n",
        "messages = [\n",
        "    SystemMessage(\n",
        "        content=\"You are a helpful assistant that im using to make a test request to.\"\n",
        "    ),\n",
        "    HumanMessage(\n",
        "        content=\"test from litellm. tell me why it's amazing in 1 sentence\"\n",
        "    ),\n",
        "]\n",
        "response = chat(messages)\n",
        "\n",
        "print(response)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "B9eMgnULbRaz"
      },
      "source": [
        "### Curl"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VWCCk5PFcmhS"
      },
      "source": [
        "\n",
        "\n",
        "```\n",
        "curl -X POST 'http://0.0.0.0:4000/chat/completions' \\\n",
        "    -H 'Content-Type: application/json' \\\n",
        "    -d '{\n",
        "    \"model\": \"gpt-3.5-turbo\",\n",
        "    \"messages\": [\n",
        "        {\n",
        "        \"role\": \"user\",\n",
        "        \"content\": \"what llm are you\"\n",
        "        }\n",
        "    ],\n",
        "    \"metadata\": {\n",
        "        \"generation_name\": \"ishaan-test-generation\",\n",
        "        \"generation_id\": \"gen-id22\",\n",
        "        \"trace_id\": \"trace-id22\",\n",
        "        \"trace_user_id\": \"user-id2\"\n",
        "    }\n",
        "}'\n",
        "```\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "drBAm2e1b6xe"
      },
      "source": [
        "### LlamaIndex"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "d0bZcv8fb9mL"
      },
      "outputs": [],
      "source": [
        "import os, dotenv\n",
        "\n",
        "from llama_index.llms import AzureOpenAI\n",
        "from llama_index.embeddings import AzureOpenAIEmbedding\n",
        "from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext\n",
        "\n",
        "llm = AzureOpenAI(\n",
        "    engine=\"azure-gpt-3.5\",               # model_name on litellm proxy\n",
        "    temperature=0.0,\n",
        "    azure_endpoint=\"http://0.0.0.0:4000\", # litellm proxy endpoint\n",
        "    api_key=\"sk-1234\",                    # litellm proxy API Key\n",
        "    api_version=\"2023-07-01-preview\",\n",
        ")\n",
        "\n",
        "embed_model = AzureOpenAIEmbedding(\n",
        "    deployment_name=\"azure-embedding-model\",\n",
        "    azure_endpoint=\"http://0.0.0.0:4000\",\n",
        "    api_key=\"sk-1234\",\n",
        "    api_version=\"2023-07-01-preview\",\n",
        ")\n",
        "\n",
        "\n",
        "documents = SimpleDirectoryReader(\"llama_index_data\").load_data()\n",
        "service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)\n",
        "index = VectorStoreIndex.from_documents(documents, service_context=service_context)\n",
        "\n",
        "query_engine = index.as_query_engine()\n",
        "response = query_engine.query(\"What did the author do growing up?\")\n",
        "print(response)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xypvNdHnb-Yy"
      },
      "source": [
        "### Langchain JS"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "R55mK2vCcBN2"
      },
      "outputs": [],
      "source": [
        "import { ChatOpenAI } from \"@langchain/openai\";\n",
        "\n",
        "\n",
        "const model = new ChatOpenAI({\n",
        "  modelName: \"gpt-4\",\n",
        "  openAIApiKey: \"sk-1234\",\n",
        "  modelKwargs: {\"metadata\": \"hello world\"} // 👈 PASS Additional params here\n",
        "}, {\n",
        "  basePath: \"http://0.0.0.0:4000\",\n",
        "});\n",
        "\n",
        "const message = await model.invoke(\"Hi there!\");\n",
        "\n",
        "console.log(message);\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nC4bLifCcCiW"
      },
      "source": [
        "### OpenAI JS"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "MICH8kIMcFpg"
      },
      "outputs": [],
      "source": [
        "const { OpenAI } = require('openai');\n",
        "\n",
        "const openai = new OpenAI({\n",
        "  apiKey: \"sk-1234\", // This is the default and can be omitted\n",
        "  baseURL: \"http://0.0.0.0:4000\"\n",
        "});\n",
        "\n",
        "async function main() {\n",
        "  const chatCompletion = await openai.chat.completions.create({\n",
        "    messages: [{ role: 'user', content: 'Say this is a test' }],\n",
        "    model: 'gpt-3.5-turbo',\n",
        "  }, {\"metadata\": {\n",
        "            \"generation_name\": \"ishaan-generation-openaijs-client\",\n",
        "            \"generation_id\": \"openaijs-client-gen-id22\",\n",
        "            \"trace_id\": \"openaijs-client-trace-id22\",\n",
        "            \"trace_user_id\": \"openaijs-client-user-id2\"\n",
        "        }});\n",
        "}\n",
        "\n",
        "main();\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "D1Q07pEAcGTb"
      },
      "source": [
        "### Anthropic SDK"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qBjFcAvgcI3t"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "from anthropic import Anthropic\n",
        "\n",
        "client = Anthropic(\n",
        "    base_url=\"http://localhost:4000\", # proxy endpoint\n",
        "    api_key=\"sk-test-proxy-key-123\", # litellm proxy virtual key (example)\n",
        ")\n",
        "\n",
        "message = client.messages.create(\n",
        "    max_tokens=1024,\n",
        "    messages=[\n",
        "        {\n",
        "            \"role\": \"user\",\n",
        "            \"content\": \"Hello, Claude\",\n",
        "        }\n",
        "    ],\n",
        "    model=\"claude-3-opus-20240229\",\n",
        ")\n",
        "print(message.content)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dFAR4AJGcONI"
      },
      "source": [
        "## /embeddings"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lgNoM281cRzR"
      },
      "source": [
        "### OpenAI Python SDK"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NY3DJhPfcQhA"
      },
      "outputs": [],
      "source": [
        "import openai\n",
        "from openai import OpenAI\n",
        "\n",
        "# set base_url to your proxy server\n",
        "# set api_key to send to proxy server\n",
        "client = OpenAI(api_key=\"<proxy-api-key>\", base_url=\"http://0.0.0.0:4000\")\n",
        "\n",
        "response = client.embeddings.create(\n",
        "    input=[\"hello from litellm\"],\n",
        "    model=\"text-embedding-ada-002\"\n",
        ")\n",
        "\n",
        "print(response)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hmbg-DW6cUZs"
      },
      "source": [
        "### Langchain Embeddings"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "lX2S8Nl1cWVP"
      },
      "outputs": [],
      "source": [
        "from langchain.embeddings import OpenAIEmbeddings\n",
        "\n",
        "embeddings = OpenAIEmbeddings(model=\"sagemaker-embeddings\", openai_api_base=\"http://0.0.0.0:4000\", openai_api_key=\"temp-key\")\n",
        "\n",
        "\n",
        "text = \"This is a test document.\"\n",
        "\n",
        "query_result = embeddings.embed_query(text)\n",
        "\n",
        "print(f\"SAGEMAKER EMBEDDINGS\")\n",
        "print(query_result[:5])\n",
        "\n",
        "embeddings = OpenAIEmbeddings(model=\"bedrock-embeddings\", openai_api_base=\"http://0.0.0.0:4000\", openai_api_key=\"temp-key\")\n",
        "\n",
        "text = \"This is a test document.\"\n",
        "\n",
        "query_result = embeddings.embed_query(text)\n",
        "\n",
        "print(f\"BEDROCK EMBEDDINGS\")\n",
        "print(query_result[:5])\n",
        "\n",
        "embeddings = OpenAIEmbeddings(model=\"bedrock-titan-embeddings\", openai_api_base=\"http://0.0.0.0:4000\", openai_api_key=\"temp-key\")\n",
        "\n",
        "text = \"This is a test document.\"\n",
        "\n",
        "query_result = embeddings.embed_query(text)\n",
        "\n",
        "print(f\"TITAN EMBEDDINGS\")\n",
        "print(query_result[:5])"
      ]
    },
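    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The model names above (`sagemaker-embeddings`, `bedrock-embeddings`, `bedrock-titan-embeddings`) are aliases defined in the proxy's `config.yaml`. A minimal sketch of what such entries might look like, written out from Python for illustration; the SageMaker endpoint name and Bedrock model ids below are placeholders, so adjust them to your deployments:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# sketch: example proxy config defining the embedding aliases used above\n",
        "# (placeholder model strings; see the LiteLLM Proxy config docs for the full format)\n",
        "config_yaml = \"\"\"\n",
        "model_list:\n",
        "  - model_name: sagemaker-embeddings\n",
        "    litellm_params:\n",
        "      model: sagemaker/<your-sagemaker-embedding-endpoint>\n",
        "  - model_name: bedrock-embeddings\n",
        "    litellm_params:\n",
        "      model: bedrock/cohere.embed-english-v3\n",
        "  - model_name: bedrock-titan-embeddings\n",
        "    litellm_params:\n",
        "      model: bedrock/amazon.titan-embed-text-v1\n",
        "\"\"\"\n",
        "\n",
        "with open(\"proxy_config.yaml\", \"w\") as f:\n",
        "    f.write(config_yaml)\n",
        "\n",
        "# then start the proxy with: litellm --config proxy_config.yaml"
      ]
    },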
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oqGbWBCQcYfd"
      },
      "source": [
        "### Curl Request"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7rkIMV9LcdwQ"
      },
      "source": [
        "\n",
        "\n",
        "```curl\n",
        "curl -X POST 'http://0.0.0.0:4000/embeddings' \\\n",
        "  -H 'Content-Type: application/json' \\\n",
        "  -d ' {\n",
        "  \"model\": \"text-embedding-ada-002\",\n",
        "  \"input\": [\"write a litellm poem\"]\n",
        "  }'\n",
        "```\n",
        "\n"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
