You’re building an application with the ChatGPT API when suddenly your requests start failing. Error codes like 400, 401, 429, or 500 appear in your console, and your integration stops working. Understanding what these error codes mean and how to fix them is essential for any developer working with OpenAI’s API. For a broader overview of every ChatGPT error type including non-API issues, see the complete ChatGPT troubleshooting guide.
This guide breaks down every common ChatGPT API error code and shows you exactly how to resolve each one.
Understanding HTTP Status Codes
The ChatGPT API uses standard HTTP status codes to communicate what went wrong with your request. The first digit tells you the general category:
4xx errors mean something is wrong with your request. You sent invalid data, used incorrect authentication, or violated usage limits. These are client-side errors you need to fix in your code.
5xx errors indicate server-side problems at OpenAI. The request itself might be valid, but OpenAI’s systems couldn’t process it due to internal issues. These typically require waiting for OpenAI to resolve them.
Error 400: Bad Request
This error means your request is malformed or contains invalid parameters. The server received your request but couldn’t understand it.
Common causes:
Invalid JSON syntax in your request body. Missing required fields such as model or messages. Incorrect data types for parameters, such as sending temperature as a string instead of a number. A message format that does not follow the required structure. Using unsupported model names or deprecated parameters.
How to fix it:
Validate your JSON before sending. Use a JSON linter to catch syntax errors like missing commas or mismatched brackets. Check that all required fields are present and spelled correctly.
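If your request body starts life as a string, one quick way to validate it in JavaScript is to round-trip it through JSON.parse, which throws on syntax errors (the helper name here is illustrative):

```javascript
// JSON.parse throws on syntax errors such as trailing commas or
// mismatched brackets, so it doubles as a lightweight validator.
function validateJson(raw) {
  try {
    return { valid: true, value: JSON.parse(raw) };
  } catch (error) {
    return { valid: false, error: error.message };
  }
}

console.log(validateJson('{"model": "gpt-3.5-turbo"}').valid); // true
console.log(validateJson('{"model": gpt-3.5-turbo}').valid);   // false
```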
Verify parameter data types. Temperature, max_tokens, and similar parameters must be numbers, not strings. If pulling values from environment variables, convert them to the correct type.
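Environment variables always arrive as strings, so convert them before building the request body. A minimal sketch (the variable names are illustrative):

```javascript
// Convert string values from the environment into the numeric types
// the API expects; fall back to defaults when a variable is unset.
const temperature = Number(process.env.OPENAI_TEMPERATURE ?? "0.7");
const maxTokens = parseInt(process.env.OPENAI_MAX_TOKENS ?? "256", 10);

const body = {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello" }],
  temperature,           // a number, not "0.7"
  max_tokens: maxTokens, // a number, not "256"
};
```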
Review the API documentation to ensure you’re using current parameter names and formats. OpenAI occasionally updates their API, and old code using deprecated parameters will fail.
Test with minimal examples. Start with the simplest possible request that should work, then add complexity gradually to identify exactly what causes the error.
Example of correct request format:
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "user", "content": "Hello"}
  ],
  "temperature": 0.7
}
Error 401: Unauthorized
This error indicates authentication failure. The API doesn’t recognize your credentials or they’re invalid.
Common causes:
Missing API key in the request headers. Incorrect API key format or typos. Expired or revoked API key. API key not properly passed to the authorization header. Using the wrong authentication method.
How to fix it:
Verify your API key is correct. Log into your OpenAI account and check that the key you’re using matches exactly, including any hyphens or special characters.
Check the authorization header format. It should be exactly: Authorization: Bearer YOUR_API_KEY. Note the space after “Bearer” and ensure there are no extra spaces or characters.
Generate a new API key if yours might be compromised or expired. Go to the OpenAI dashboard, create a fresh key, and replace the old one in your code.
Never expose API keys in client-side code. Keys should only exist in server-side code or secure environment variables. If your key was exposed publicly, revoke it immediately and create a new one.
Ensure environment variables are loading correctly. If storing your key in environment files, verify they’re being read properly during runtime.
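The checks above can be combined into a small helper that fails fast when the key is missing, so the problem surfaces before any request goes out and 401s at the server (the function name is illustrative):

```javascript
// Builds request headers for the API. Throws immediately when the
// key is missing or blank instead of letting the request fail with 401.
function buildHeaders(apiKey) {
  if (typeof apiKey !== "string" || apiKey.trim() === "") {
    throw new Error("API key is missing: check your environment variables");
  }
  return {
    "Content-Type": "application/json",
    // Exactly one space after "Bearer", no quotes, no trailing whitespace.
    Authorization: `Bearer ${apiKey.trim()}`,
  };
}

// Usage: buildHeaders(process.env.OPENAI_API_KEY)
```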
Error 403: Forbidden
This error means you’re authenticated but don’t have permission to access the requested resource.
Common causes:
Your account doesn’t have access to the model you’re requesting. Attempting to use features not available in your subscription tier. Geographic restrictions preventing access from your location. Account suspension or restrictions due to policy violations.
How to fix it:
Verify which models your account can access. Check the OpenAI dashboard to see your available models and rate limits.
Ensure your payment method is valid and your account is in good standing. Expired payment methods or outstanding balances can restrict API access.
Check for service availability in your region. Some models or features might have geographic restrictions.
Contact OpenAI support if you believe you should have access but are getting 403 errors. They can investigate account-specific restrictions.
Error 429: Too Many Requests
This indicates you’ve exceeded rate limits. You’re sending requests too quickly or have used up your quota. Web interface users hitting the same error can find additional solutions in the full too many requests error fix guide.
Common causes:
Exceeding the requests-per-minute limit for your tier. Hitting token usage quotas. Making too many concurrent requests. Sending requests in tight loops without proper delays.
How to fix it:
Implement rate limiting in your code. Track how many requests you’re making and add delays between them to stay under limits.
Use exponential backoff for retries. When you get a 429 error, wait before retrying. Start with a one-second delay, then double it with each subsequent failure.
Check your current rate limits in the OpenAI dashboard. Different subscription tiers have different limits. You might need to upgrade if you consistently hit limits.
Batch requests when possible. Instead of making many small requests, combine multiple queries into single requests where appropriate.
Monitor your token usage. Both input and output tokens count toward quotas. Optimize prompts to reduce token consumption.
Example retry logic (makeAPICall stands in for your request function):
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function callWithRetry(maxRetries = 5) {
  let delay = 1000;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await makeAPICall();
    } catch (error) {
      // Only back off on rate limiting; rethrow everything else,
      // including the final 429 once retries are exhausted.
      if (error.status === 429 && attempt < maxRetries - 1) {
        await sleep(delay);
        delay *= 2;
      } else {
        throw error;
      }
    }
  }
}
Error 500: Internal Server Error
This indicates something went wrong on OpenAI’s servers while processing your request.
Common causes:
Temporary server glitches or overload. Bugs in OpenAI’s systems. Infrastructure issues. Model processing failures.
How to fix it:
Retry the request after a brief delay. Most 500 errors are temporary and resolve within seconds or minutes.
Check status.openai.com for any reported incidents. If OpenAI is experiencing widespread issues, you’ll see notifications there.
Implement automatic retry logic for 500 errors. Wait a few seconds and try again, using exponential backoff.
If errors persist for your specific requests but not others, simplify your prompt. Extremely long or complex inputs occasionally trigger processing issues.
Contact support if 500 errors continue for extended periods. Provide the specific request that’s failing and any error details returned.
Error 502: Bad Gateway
This error means OpenAI’s gateway received an invalid response from an upstream server.
Common causes:
Temporary network issues between OpenAI’s systems. Server deployment or maintenance. Infrastructure problems.
How to fix it:
Retry after a short delay. These errors are almost always temporary.
Check the status page for ongoing maintenance or incidents.
If errors persist, the issue is likely on OpenAI’s side. Wait and try again later.
Error 503: Service Unavailable
This indicates OpenAI’s API is temporarily unable to handle your request.
Common causes:
Server overload from high traffic. Scheduled maintenance. Capacity limitations during peak usage.
How to fix it:
Wait and retry. 503 errors typically resolve once server load decreases.
Implement retry logic with longer delays for 503 errors compared to other failures.
Try requesting during off-peak hours if you consistently encounter 503 during busy times.
Upgrade to a higher subscription tier; higher tiers often get priority during high-load periods.
Error 504: Gateway Timeout
This means your request took too long to process and the gateway timed out waiting for a response. For timeout issues in the web interface rather than the API, see the ChatGPT timeout error fix guide.
Common causes:
Extremely long prompts requiring extended processing time. Requesting very long outputs with high max_tokens. Server performance issues. Network latency problems.
How to fix it:
Reduce prompt complexity and length. Break large requests into smaller pieces.
Lower your max_tokens parameter to request shorter responses.
Check your network connection quality. Poor connectivity can contribute to timeouts.
Retry the request. Timeout issues are often temporary.
Best Practices for Error Handling
Implementing robust error handling prevents small issues from breaking your application.
Always catch and handle errors. Never let API calls fail silently. Log errors for debugging and monitoring.
Implement retry logic with exponential backoff. Automatically retry failed requests with increasing delays between attempts.
Distinguish between retriable and non-retriable errors. Retry 429, 500, 502, 503, and 504 errors. Don’t retry 400, 401, or 403 errors as they won’t resolve without code changes.
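That split can be encoded once and reused wherever retry decisions are made:

```javascript
// 429 plus the 5xx codes are transient; 4xx client errors are not
// and will keep failing until the request itself is fixed.
const RETRIABLE_STATUSES = new Set([429, 500, 502, 503, 504]);

function isRetriable(status) {
  return RETRIABLE_STATUSES.has(status);
}

console.log(isRetriable(503)); // true
console.log(isRetriable(401)); // false
```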
Log error details. Capture error codes, messages, and request details to help with debugging.
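A sketch of a structured error record that captures those details in one place (the field names are illustrative):

```javascript
// Collects the fields worth logging for a failed API call into one
// object that can be serialized to your logging system.
function buildErrorRecord(error, requestId) {
  return {
    requestId,
    status: error.status ?? null,
    message: error.message,
    timestamp: new Date().toISOString(),
  };
}

// Usage: console.error(JSON.stringify(buildErrorRecord(err, "req-123")));
```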
Set appropriate timeouts. Don’t let requests hang indefinitely. Set reasonable timeout values and handle timeout errors gracefully.
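One way to bound any promise-returning call in JavaScript, such as a fetch to the API, is to race it against a timer. A minimal sketch (the 30-second value is illustrative):

```javascript
// Rejects if the wrapped promise does not settle within timeoutMs;
// otherwise resolves or rejects with the promise's own outcome.
function withTimeout(promise, timeoutMs) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Request timed out after ${timeoutMs} ms`)),
      timeoutMs
    );
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage (hypothetical): await withTimeout(fetch(url, options), 30000);
```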
Monitor error rates. Track how often different errors occur. Spikes in error rates indicate problems that need investigation.
Provide user feedback. When errors occur in user-facing applications, show helpful messages rather than technical error codes.
Debugging API Errors
When encountering errors, systematic debugging helps identify root causes quickly.
Verify your request format by testing with curl or Postman first. This isolates whether the issue is in your code or the API itself.
Check the response body. Error responses often include detailed messages explaining exactly what went wrong.
Review recent code changes. If the API worked previously but now fails, recent changes likely introduced the problem.
Test with minimal examples. Strip your request down to the bare minimum that should work, then add complexity back gradually.
Compare with working examples from OpenAI’s documentation. Ensure your code follows the same patterns.
Frequently Asked Questions
Why am I getting 400 errors even though my JSON looks correct?
The most common cause is incorrect data types. Parameters like temperature must be numbers, not strings. If you’re reading values from environment variables or config files, they might be strings that need conversion to numbers.
How do I know what my current rate limits are?
Log into the OpenAI dashboard and check your account settings. Rate limits vary by subscription tier and model. Free tier users have much lower limits than paid subscribers.
Can I increase my rate limits?
Yes. Upgrading to higher subscription tiers increases rate limits. For very high volume needs, contact OpenAI about custom enterprise plans with dedicated rate limits.
What’s the difference between 401 and 403 errors?
401 means authentication failed entirely, usually due to missing or invalid API keys. 403 means you’re authenticated but lack permission for what you’re requesting, like trying to use models not available in your subscription.
Should I retry all API errors automatically?
No. Only retry transient errors: 429 and the server errors 500, 502, 503, and 504. Don’t retry client errors like 400, 401, or 403 because they indicate problems with your request that won’t resolve without code changes.
How long should I wait before retrying after a 429 error?
Start with a one-second delay, then double it with each retry up to a reasonable maximum like 32 seconds. This exponential backoff prevents overwhelming the API while giving rate limits time to reset.
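That schedule, 1, 2, 4, 8, 16, then 32 seconds, can be computed directly:

```javascript
// Exponential backoff: baseMs * 2^attempt, capped at capMs.
function backoffDelayMs(attempt, baseMs = 1000, capMs = 32000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

console.log([0, 1, 2, 3, 4, 5].map((n) => backoffDelayMs(n)));
// [ 1000, 2000, 4000, 8000, 16000, 32000 ]
```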
Why do I sometimes get 500 errors for requests that worked before?
500 errors indicate temporary server problems at OpenAI. These can happen randomly due to infrastructure issues, deployments, or unusual load. They’re not caused by your code and typically resolve quickly.
Can network issues cause API errors?
Yes. Poor network connectivity can cause timeout errors, connection failures, and other issues that manifest as API errors. Always verify your network connection when troubleshooting.
What should I do if errors persist for hours?
Check status.openai.com first. If no incident is reported, verify your code hasn’t changed and your account is in good standing. If problems continue, contact OpenAI support with detailed error logs.
Do API errors count against my usage quota?
Failed requests due to server errors typically don’t count against quotas. However, requests that fail due to client errors like 400 or 401 might still consume quota since the server processed them before determining they were invalid.