API Documentation
Prompting
Understand how the ScholarAI API uses prompting
Within the API
In some endpoints, you may see a key in the response payload labeled hint or similar. If the API response is being read by an LLM agent, include this entry in the context you pass to the agent, to help guide the LLM toward interpreting the response in the intended way.
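As one way to apply this, the sketch below folds a hint field from a response into the text an agent sees. The payload shape and key placement are assumptions for illustration; real ScholarAI responses may nest the hint differently.

```python
import json

def build_agent_context(response_body: str) -> str:
    """Fold any `hint` field from an API response into the text an LLM agent sees.

    The payload shape here is hypothetical; the real response may nest the
    hint elsewhere or use a similarly named key.
    """
    payload = json.loads(response_body)
    hint = payload.pop("hint", None)
    context = json.dumps(payload, indent=2)
    if hint:
        # Surface the hint as explicit guidance ahead of the raw payload.
        context = f"Interpretation guidance: {hint}\n\n{context}"
    return context

example = '{"results": [{"title": "Paper A"}], "hint": "Summarize each result in one sentence."}'
print(build_agent_context(example))
```

Placing the hint before the payload, rather than after, makes it more likely the agent reads the guidance before committing to an interpretation of the data.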
For Agents
Within our own GPT and agents, we use a number of mechanisms for prompting agents to call the API in the intended way. Here are our published prompts for developers to implement directly in their agents:
General prompt for the entire OpenAPI Spec
For use inside the instructions parameter of agentic systems, such as OpenAI's Assistants API. Primarily verified against GPT-4V.
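A minimal sketch of wiring such a prompt into the instructions parameter is shown below. The prompt text is a placeholder, not the actual published prompt, and the model name is an assumption; the commented-out call shows where the OpenAI SDK would consume the same keyword arguments.

```python
# Placeholder standing in for a published ScholarAI prompt -- substitute the
# real prompt text from the documentation.
SCHOLARAI_PROMPT = (
    "You have access to the ScholarAI API. Follow the hint fields in "
    "endpoint responses when interpreting results."
)

# Keyword arguments for creating an assistant; model choice is an assumption.
assistant_kwargs = {
    "model": "gpt-4o",
    "name": "ScholarAI Agent",
    "instructions": SCHOLARAI_PROMPT,
}

# With the official OpenAI SDK, these would be passed as:
# from openai import OpenAI
# client = OpenAI()
# assistant = client.beta.assistants.create(**assistant_kwargs)
print(assistant_kwargs["instructions"])
```

Keeping the prompt in the instructions parameter, rather than in individual messages, applies it to every run of the assistant without repeating it per request.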
OpenAPI Spec with descriptions and encoded behavior between endpoints
It can be found at the following link