
added openai gpt4o mini model #1814

Open · wants to merge 5 commits into base: main
Conversation

@meetpateltech (Contributor) commented Jul 18, 2024

blog post


🚀 This description was created by Ellipsis for commit 6256a11

Summary:

Added support for the new OpenAI GPT-4o Mini model, including chat setting limits, model definition, and type updates.

Key points:

  • Added gpt-4o-mini model to CHAT_SETTING_LIMITS in lib/chat-setting-limits.ts with specific temperature, token output length, and context length limits.
  • Added GPT4oMini model definition to lib/models/llm/openai-llm-list.ts with pricing and other details.
  • Updated OpenAILLMID type in types/llms.ts to include gpt-4o-mini.

Generated with ❤️ by ellipsis.dev
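For context, the model definition mentioned in the second bullet might look roughly like the sketch below. This is a hedged reconstruction, not the PR's actual diff: the field names follow common chatbot-ui conventions, and the pricing figures are OpenAI's published GPT-4o mini rates, so treat all of them as assumptions.

```typescript
// Hypothetical sketch of a GPT-4o Mini entry for lib/models/llm/openai-llm-list.ts.
// Field names and pricing values are assumptions, not the PR's actual code.
const GPT4oMini = {
  modelId: "gpt-4o-mini",
  modelName: "GPT-4o Mini",
  provider: "openai",
  hostedId: "gpt-4o-mini",
  imageInput: true,
  pricing: {
    currency: "USD",
    unit: "1M tokens",
    inputCost: 0.15, // $0.15 per 1M input tokens (OpenAI's published rate)
    outputCost: 0.6  // $0.60 per 1M output tokens
  }
}
```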

@ellipsis-dev (bot) left a comment

👍 Looks good to me! Reviewed everything up to 6256a11 in 42 seconds

More details
  • Looked at 67 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 drafted comment based on config settings.
1. lib/chat-setting-limits.ts:160
  • Draft comment:
    The addition of gpt-4o-mini with a MAX_TEMPERATURE of 2.0 seems inconsistent with other models of similar capacity. Typically, models in this range have a MAX_TEMPERATURE of 1.0. Please verify if this is intended or if it should be aligned with similar models.
  MAX_TEMPERATURE: 1.0,
  • Reason this comment was not posted:
    Confidence of 0% on close inspection, compared to threshold of 50%.

Workflow ID: wflow_v6463Dz8m4qzggNs



@@ -30,7 +30,8 @@ export async function POST(request: Request) {
     temperature: chatSettings.temperature,
     max_tokens:
       chatSettings.model === "gpt-4-vision-preview" ||
-      chatSettings.model === "gpt-4o"
+      chatSettings.model === "gpt-4o" ||
+      chatSettings.model === "gpt-4o-mini"


mini supports 16k output tokens - would this limit it to 4k?


it will.

It should be

const response = await openai.chat.completions.create({
  model: chatSettings.model as ChatCompletionCreateParamsBase["model"],
  messages: messages as ChatCompletionCreateParamsBase["messages"],
  temperature: chatSettings.temperature,
  max_tokens:
    chatSettings.model === "gpt-4-vision-preview" ||
    chatSettings.model === "gpt-4o"
      ? 4096
      : chatSettings.model === "gpt-4o-mini"
      ? 16383
      : null, 
  stream: true
});

@meetpateltech (Contributor, Author) commented Jul 19, 2024

Yeah, it's fixed! Thanks.

@o-24 left a comment

Need to fix MAX_TOKEN_OUTPUT_LENGTH for gpt-4o mini in lib/chat-setting-limits.ts from 4096 tokens to 16383.
Everything else is good :)

"gpt-4o-mini": {
  MIN_TEMPERATURE: 0.0,
  MAX_TEMPERATURE: 2.0,
  MAX_TOKEN_OUTPUT_LENGTH: 4096,

By the way, MAX_TOKEN_OUTPUT_LENGTH: 4096 will also limit gpt-4o mini's maximum response length to 4k tokens; it should be 16383 tokens.
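Concretely, the corrected entry in lib/chat-setting-limits.ts would then look roughly like this. Note the MAX_CONTEXT_LENGTH field is not shown in the quoted snippet and is assumed here from GPT-4o mini's 128k context window:

```typescript
// Sketch of the corrected gpt-4o-mini limits entry.
// MAX_CONTEXT_LENGTH is an assumption (128k window), not taken from the PR diff.
const CHAT_SETTING_LIMITS = {
  "gpt-4o-mini": {
    MIN_TEMPERATURE: 0.0,
    MAX_TEMPERATURE: 2.0,
    MAX_TOKEN_OUTPUT_LENGTH: 16383, // was 4096; mini supports 16k output tokens
    MAX_CONTEXT_LENGTH: 128000
  }
}
```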

@meetpateltech (Contributor, Author)

just updated it! thanks 🙌

update output token in gpt4o mini
@faraday mentioned this pull request on Aug 12, 2024
@xingfanxia: @mckaywrigley please merge this

@Davitcoin: @mckaywrigley

1 similar comment

@haydenkong: @mckaywrigley

Labels: none yet · Projects: none yet · 8 participants