fix: use LM Studio native API endpoint (/api/v1) instead of OpenAI-compatible (/v1)#12224

Draft
roomote-v0[bot] wants to merge 1 commit into main from fix/lmstudio-native-api-endpoint
Conversation


roomote-v0 (Contributor) commented Apr 29, 2026

Related GitHub Issue

Closes: #12223

Description

This PR attempts to address Issue #12223. The LM Studio provider was hardcoding the OpenAI-compatible /v1 path for all API requests, even though the user selected "LM Studio" as the provider. This meant requests went to /v1/chat/completions and /v1/models instead of the native LM Studio endpoints at /api/v1/chat/completions and /api/v1/models.

The native /api/v1 endpoint (introduced in LM Studio 0.4.0) provides features like stateful chats, prompt caching, and context length management that are not available through the OpenAI-compatible endpoint.

Changes made:

  • src/api/providers/lm-studio.ts: Changed OpenAI client baseURL from /v1 to /api/v1, and model listing endpoint from /v1/models to /api/v1/models
  • src/api/providers/fetchers/lmstudio.ts: Changed connection test and model fetching endpoints from /v1/models to /api/v1/models
  • Updated all corresponding test assertions to expect the new /api/v1 paths
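The core of the change can be sketched as a small base-URL helper. This is a minimal illustration, not the provider's actual code: the function name `lmStudioBaseUrl` and its parameters are hypothetical, and only the `/v1` vs `/api/v1` prefixes come from the PR.

```typescript
// Hypothetical helper mirroring the fix: build the base URL the OpenAI SDK
// client is pointed at. Before the fix the provider hardcoded the
// OpenAI-compatible "/v1" prefix; the native LM Studio API (0.4.0+) lives
// under "/api/v1".
function lmStudioBaseUrl(host: string, useNativeApi: boolean = true): string {
  const prefix = useNativeApi ? "/api/v1" : "/v1";
  // Strip any trailing slashes so we don't produce "...//api/v1".
  return host.replace(/\/+$/, "") + prefix;
}

console.log(lmStudioBaseUrl("http://localhost:1234"));
// http://localhost:1234/api/v1
console.log(lmStudioBaseUrl("http://localhost:1234/", false));
// http://localhost:1234/v1
```

The same prefix swap applies to the model-listing URL (`/api/v1/models` instead of `/v1/models`).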

Feedback and guidance are welcome.

Test Procedure

  • All existing LM Studio tests updated and passing (24/24)
  • Run: cd src && npx vitest run api/providers/__tests__/lmstudio.spec.ts api/providers/__tests__/lm-studio-timeout.spec.ts api/providers/fetchers/__tests__/lmstudio.test.ts
  • Manual verification: With LM Studio running locally, selecting "LM Studio" as provider should now send requests to /api/v1/* endpoints instead of /v1/*

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue (see "Related GitHub Issue" above).
  • Scope: My changes are focused on the linked issue (one major feature/fix per PR).
  • Self-Review: I have performed a thorough self-review of my code.
  • Testing: New and/or updated tests have been added to cover my changes (if applicable).
  • Documentation Impact: No documentation updates required -- this is an internal endpoint path change.
  • Contribution Guidelines: I have read and agree to the Contributor Guidelines.

Documentation Updates

  • No documentation updates are required.

Additional Notes

The native LM Studio API at /api/v1 uses the same response format as the OpenAI-compatible endpoint, so the OpenAI SDK client continues to work without any parsing changes. The key benefit is that the native endpoint enables LM Studio-specific features like stateful chats and prompt caching.
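To illustrate why no parsing changes are needed: both endpoints return an OpenAI-style chat-completion envelope, so a single extractor covers either path. The interface and function below are an assumed minimal shape for illustration, not the provider's actual types.

```typescript
// Assumed minimal shape of the OpenAI-style response returned by both the
// OpenAI-compatible /v1 endpoint and the native /api/v1 endpoint.
interface ChatCompletionResponse {
  choices: { message: { role: string; content: string } }[];
}

// One parser works regardless of which endpoint produced the response.
function extractText(resp: ChatCompletionResponse): string {
  return resp.choices[0]?.message.content ?? "";
}

const sample: ChatCompletionResponse = {
  choices: [{ message: { role: "assistant", content: "Hello from LM Studio" } }],
};
console.log(extractText(sample)); // Hello from LM Studio
```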


Development

Successfully merging this pull request may close these issues.

[BUG] I can't use LM Studio's native APIs /api/v1/.....
