Add a new model_provider option, `request_compression`, to enable request
compression. Clients can compress with zstd or gzip; the server also supports brotli.
You can test this against the sign-in-with-ChatGPT flow by adding the
following profile:
```
[profiles.compressed]
name = "compressed"
model_provider = "openai-zstd"
[model_providers.openai-zstd]
name = "OpenAI (ChatGPT, zstd)"
wire_api = "responses"
request_compression = "zstd"
requires_openai_auth = true
```
This will zstd-compress your request body before sending it to the server.
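The mechanism is the usual HTTP content negotiation: serialize the request body, compress it with the configured codec, and send it with a matching `Content-Encoding` header so the server knows how to decode it. A minimal sketch of that flow, using gzip from the Python standard library (the real client uses the codec named by `request_compression`, e.g. zstd; the function and payload here are hypothetical, for illustration only):

```python
import gzip
import json


def compress_request(payload: dict, encoding: str = "gzip") -> tuple[bytes, dict]:
    """Serialize and compress a request body; return (body, headers).

    Illustrative sketch only: the actual implementation lives in the
    client's request layer and supports more codecs (e.g. zstd).
    """
    raw = json.dumps(payload).encode("utf-8")
    if encoding == "gzip":
        body = gzip.compress(raw)
    else:
        raise ValueError(f"unsupported encoding: {encoding}")
    headers = {
        "Content-Encoding": encoding,
        "Content-Type": "application/json",
    }
    return body, headers


# Round-trip check: the server can decompress back to the original JSON.
body, headers = compress_request({"input": "hello"})
assert gzip.decompress(body) == b'{"input": "hello"}'
assert headers["Content-Encoding"] == "gzip"
```

The same shape applies to zstd; only the compression call and the `Content-Encoding` value change.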