The output is the same as the input. #81
Comments
In addition, I just tried the following method, but the output is still the same as the input. The code is as follows:
The output is:
So how do I solve this problem?
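The code and output referenced above did not survive on this page, so as a hedged diagnostic sketch (the tensors below are hypothetical stand-ins, not the issue's actual data): with Hugging Face `generate()`, the returned sequence always begins with the prompt tokens, so decoding the whole output can look like "the output is the same as the input" when few or no new tokens were produced. Slicing at the prompt length separates the echo from the actual generation:

```python
import torch

# Hypothetical tensors standing in for the issue's real inputs/outputs.
input_ids = torch.tensor([[15496, 11, 995]])           # an encoded prompt
output_ids = torch.tensor([[15496, 11, 995, 318, 257]])  # a generate() result

prompt_len = input_ids.shape[1]
# generate() prepends the prompt, so the leading tokens should match it.
echoed = torch.equal(output_ids[:, :prompt_len], input_ids)
# Everything after the prompt is what the model actually generated.
new_tokens = output_ids[:, prompt_len:]

print("output starts with the prompt:", echoed)
print("newly generated tokens:", new_tokens.shape[1])
```

If `new_tokens` is empty (or all padding), the model stopped immediately, which points at a padding or attention-mask problem rather than at the decoding step.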
I have a strange problem. I want to run batched data through the model for unified prediction. So I first use `tokenizer.encode()` to encode the data, and I also use `padding` to make the sequences a uniform length. The specific code is shown below:
However, the output is the same as the prompt, as follows:
If I skip `tokenizer.encode` and use the `tokenizer` directly to make predictions, I get normal output. The code looks like this:
The output is as follows:
Why does this happen? Is there something important I'm missing?
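The original code snippets did not survive on this page, so the exact cause can't be confirmed, but a common reason for this symptom with decoder-only models in `transformers` is right-padded batches passed to `generate()` without an attention mask. A minimal hedged sketch of batched generation that avoids both pitfalls (the `gpt2` model name and the prompts are illustrative assumptions, not taken from the issue):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model for illustration; substitute the model from the issue.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Decoder-only models should be padded on the LEFT for batched generation;
# right padding puts pad tokens between the prompt and the new tokens.
tokenizer.padding_side = "left"
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

prompts = ["The capital of France is", "The capital of Japan is"]
# Use the tokenizer's __call__ rather than tokenizer.encode, so an
# attention_mask is returned alongside the padded input_ids.
batch = tokenizer(prompts, return_tensors="pt", padding=True)

out = model.generate(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    max_new_tokens=20,
    pad_token_id=tokenizer.pad_token_id,
)

# generate() returns prompt + continuation; slice off the prompt so the
# decoded text contains only the newly generated part.
new_tokens = out[:, batch["input_ids"].shape[1]:]
for text in tokenizer.batch_decode(new_tokens, skip_special_tokens=True):
    print(text)
```

With `tokenizer.encode()` alone there is no `attention_mask`, so the model attends to the pad tokens and the batched path behaves differently from the single-example path described above.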