
I am currently exploring GitHub Copilot, and I am interested in using it programmatically, i.e., invoking it from code. As I understand it, GitHub Copilot is an IDE plugin, which makes me wonder how it can be automated or controlled programmatically. We know Copilot uses OpenAI models behind the scenes as an LLM.

GitHub Copilot does not provide API access to control it programmatically.

Clarification:

  • It's important to note that the plugin, once downloaded and installed, completes my code automatically. Copilot uses OpenAI models such as gpt-3.5 or gpt-4 behind the scenes. I know the OpenAI chat and text completion models very well, so that's not my question.

  • My question is how to capture the top three suggestions provided by Copilot in an automated fashion.

  • For example, given any autocomplete task sent to Copilot, I want to record the code suggestions and save them into a file.

  • How do you mean? Once you download it, it auto-completes your code. Is it not doing that for you? If not, please expand your question for clarity. Thank you. Commented Jul 25, 2023 at 16:30
  • Did you look at the vim plugin for copilot at github.com/github/copilot.vim? This appears to be calling an LSP through a JSON-RPC API. The code isn't exactly easy to understand, but this would be a decent start, I believe. Commented Aug 9, 2023 at 15:35
  • @Ingo the most promising lines are this and this. I'm really lost in this, so I'm sharing in case anybody can figure this stuff out. :^) Commented Aug 9, 2023 at 15:49
  • @Ingo the reference you have given is a community implementation utilizing the Codex/GPT API. My question is, how can I capture the top three suggestions offered by Copilot? Commented Aug 10, 2023 at 14:39

4 Answers


UPDATE

GitHub recently released their official language server for Copilot. Now, instead of manually extracting the dist/ directory from Copilot.vim, you can install @github/copilot-language-server via npm and invoke it with node ./node_modules/@github/copilot-language-server/dist/language-server.js --stdio true (or replace spawn("node", ["language-server.js", "--stdio", "true"]) with spawn("node", ["node_modules/@github/copilot-language-server/dist/language-server.js", "--stdio", "true"]) in the following code example). The original answer still applies to the official language server.

UPDATE 2 (2025)

Recently, inspired by microsoft/vscode-copilot-chat and LSP-copilot, I found that the same language server can also be used to invoke GitHub Copilot Chat, so I added a short example showing how to use it for chat requests — see the new section I appended after the original answer.
Note that these APIs are not officially documented, so their names or behaviors may change in the future.

Below is the original answer.


I just figured out how to invoke GitHub Copilot through the language server shipped with Copilot.vim. I also referred to several community implementations, like copilot.lua and LSP-copilot, to understand how it works.

TL;DR: Copilot.vim invokes GitHub Copilot via a language server, which is contained in the dist/ directory of its repository. The Vim scripts in the Copilot.vim repository serve as a client to that language server.

The language server itself is written in Node.js and uses stdio to communicate with the client. You can simply download the dist/ directory and run the server with node dist/language-server.js. However, running the server by hand is unwieldy, so I wrote a simple Node.js client to invoke the language server and collect the results.

This is a minimal example showing how to invoke GitHub Copilot programmatically via the language server extracted from Copilot.vim:

// @ts-check

const { spawn } = require("node:child_process");

const server = spawn("node", ["language-server.js", "--stdio", "true"]);
// using `fork` from `node:child_process` also works
// const server = fork("language-server.js", { silent: true, execArgv: ["--stdio", "true"] });
// `{ silent: true }` is to make sure that data sent to stdio will be returned normally

/**
 * Send a LSP message to the server.
 */
const sendMessage = (/** @type {object} */ data) => {
  const dataString = JSON.stringify({ ...data, jsonrpc: "2.0" });
  const contentLength = Buffer.byteLength(dataString, "utf8");
  const rpcString = `Content-Length: ${contentLength}\r\n\r\n${dataString}`;
  server.stdin.write(rpcString);
};

let requestId = 0;
/** @type {Map<number, (payload: object) => void | Promise<void>>} */
const resolveMap = new Map();
/** @type {Map<number, (payload: object) => void | Promise<void>>} */
const rejectMap = new Map();

/**
 * Send a LSP request to the server.
 */
const sendRequest = (/** @type {string} */ method, /** @type {object} */ params) => {
  sendMessage({ id: ++requestId, method, params });
  return new Promise((resolve, reject) => {
    resolveMap.set(requestId, resolve);
    rejectMap.set(requestId, reject);
  });
};
/**
 * Send a LSP notification to the server.
 */
const sendNotification = (/** @type {string} */ method, /** @type {object} */ params) => {
  sendMessage({ method, params });
};

/**
 * Handle received LSP payload.
 */
const handleReceivedPayload = (/** @type {object} */ payload) => {
  if ("id" in payload) {
    if ("result" in payload) {
      const resolve = resolveMap.get(payload.id);
      if (resolve) {
        resolve(payload.result);
        resolveMap.delete(payload.id);
      }
    } else if ("error" in payload) {
      const reject = rejectMap.get(payload.id);
      if (reject) {
        reject(payload.error);
        rejectMap.delete(payload.id);
      }
    }
  }
};

server.stdout.on("data", (data) => {
  /** @type {string} */
  const rawString = data.toString("utf-8");
  // Note: this naive split assumes each stdout chunk contains whole messages;
  // the buffered parser in the 2025 chat example below handles framing robustly.
  const payloadStrings = rawString.split(/Content-Length: \d+\r\n\r\n/).filter((s) => s);

  for (const payloadString of payloadStrings) {
    /** @type {Record<string, unknown>} */
    let payload;
    try {
      payload = JSON.parse(payloadString);
    } catch (e) {
      console.error(`Unable to parse payload: ${payloadString}`, e);
      continue;
    }

    handleReceivedPayload(payload);
  }
});

const wait = (/** @type {number} */ ms) => new Promise((resolve) => setTimeout(resolve, ms));

/* Main */
const main = async () => {
  // Wait for server to start
  await wait(1000);

  // Send `initialize` request
  await sendRequest("initialize", {
    capabilities: { workspace: { workspaceFolders: true } },
    initializationOptions: {
      editorInfo: {
        name: "your editor",
        version: "1.0.0",
      },
      editorPluginInfo: {
        name: "GitHub Copilot for your editor",
        version: "1.0.0",
      },
    },
  });
  // Send `initialized` notification
  sendNotification("initialized", {});

  // Send `textDocument/didOpen` notification
  sendNotification("textDocument/didOpen", {
    textDocument: {
      uri: "file:///home/fakeuser/my-project/test.py",
      languageId: "python",
      version: 0, // The change count (i.e. version) of the document
      text: "def hello():\n" + "    print('hello, w",
    },
  });

  // Send `getCompletions` request to get completions at line 1, character 19
  // (i.e. after `print('hello, w`)
  const completions = await sendRequest("getCompletions", {
    doc: {
      version: 0, // Should be the same as the latest version of the document
      position: { line: 1, character: 19 },
      uri: "file:///home/fakeuser/my-project/test.py",
    },
  });
  console.log("Completions:", completions);
};

void main();

This example assumes you’ve already logged in to GitHub Copilot via another client of the same language server, such as Copilot.vim or LSP-copilot. You can look at the relevant implementations in LSP-copilot to see how to automate the login process programmatically (the signInInitiate and signInConfirm requests).

Then you can see something like this in your console:

Results in console

If you’re not quite familiar with LSP, you can refer to the LSP specification to understand the meaning of each request and notification. Requests like initialize and textDocument/didOpen here are defined in the LSP specification, while getCompletions is a custom request defined by the GitHub Copilot language server. You can also use textDocument/didChange or textDocument/didClose to tell the language server that the document has been changed or closed (don’t forget to update the version field on each change).
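
For instance, a full-document `textDocument/didChange` notification could be sent with the `sendNotification` helper from the example above; the fields below are the standard LSP ones, and whether Copilot's server expects anything beyond them is an assumption I haven't verified:

```javascript
// Hedged sketch: report an edit via the standard LSP textDocument/didChange
// notification, using full-document sync (one content change with the new text).
const notifyEdit = (sendNotification, uri, version, newText) => {
  sendNotification("textDocument/didChange", {
    textDocument: { uri, version }, // `version` must increase with every change
    contentChanges: [{ text: newText }], // full replacement of the document text
  });
};
```

After such a notification, the next `getCompletions` request should carry the same incremented `version` in its `doc` field.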

Some other requests, such as getCompletionsCycling, can provide more completions; you can check Copilot.vim or the community implementations mentioned above to see how to use them.
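
As a hedged sketch (the parameter shape is inferred from those community clients, not from any official documentation), such a request could be issued with the `sendRequest` helper above:

```javascript
// Hedged sketch: ask for alternative completions via the undocumented
// getCompletionsCycling method. The `doc` shape is assumed to mirror the
// getCompletions request used earlier; this is not officially documented.
const getMoreCompletions = (sendRequest, uri, version, position) =>
  sendRequest("getCompletionsCycling", {
    doc: { uri, version, position },
  });
```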

It should also be possible to write a Python client to invoke the language server, but the language server itself is written in Node.js, so it’s easier to write a Node.js client to invoke it.


Additional Notes (2025)

The following code demonstrates how to use the same language server to call GitHub Copilot Chat. The implementation is inspired by microsoft/vscode-copilot-chat and LSP-copilot.
Since Node.js (v22+) has native TypeScript support, the script below is written in TypeScript and can be run directly with node client.ts.

// client.ts

import { spawn } from "node:child_process";

const server = spawn("node", [
  "node_modules/@github/copilot-language-server/dist/language-server.js",
  "--stdio",
  "true",
]);

/**
 * Send a LSP message to the server.
 */
const sendMessage = (data: object) => {
  const dataString = JSON.stringify({ ...data, jsonrpc: "2.0" });
  const contentLength = Buffer.byteLength(dataString, "utf8");
  const rpcString = `Content-Length: ${contentLength}\r\n\r\n${dataString}`;
  server.stdin.write(rpcString);
};

let requestId = 0;
const resolveMap = new Map<number, (payload: unknown) => void>();
const rejectMap = new Map<number, (payload: object) => void>();

const onBeginMap = new Map<
  number | string,
  (payload: any) => void | Promise<void>
>();
const onReportMap = new Map<
  number | string,
  (payload: any) => void | Promise<void>
>();
const onEndMap = new Map<
  number | string,
  (payload: any) => void | Promise<void>
>();

/**
 * Send a LSP request to the server.
 */
const sendRequest: {
  <P extends object>(
    method: string,
    params: { workDoneToken: number | string } & P,
    handlers: {
      onBegin?: NonNullable<ReturnType<(typeof onBeginMap)["get"]>>;
      onReport?: NonNullable<ReturnType<(typeof onReportMap)["get"]>>;
      onEnd?: NonNullable<ReturnType<(typeof onEndMap)["get"]>>;
    },
  ): Promise<unknown>;
  (method: string, params: object): Promise<unknown>;
} = (method: string, params: object, handlers?: any): any => {
  sendMessage({ id: ++requestId, method, params });
  if (
    "workDoneToken" in params &&
    (typeof params.workDoneToken === "number" ||
      typeof params.workDoneToken === "string")
  ) {
    if (handlers?.onBegin)
      onBeginMap.set(params.workDoneToken, handlers.onBegin);
    if (handlers?.onReport)
      onReportMap.set(params.workDoneToken, handlers.onReport);
    if (handlers?.onEnd) onEndMap.set(params.workDoneToken, handlers.onEnd);
  }
  return new Promise((resolve, reject) => {
    resolveMap.set(requestId, resolve);
    rejectMap.set(requestId, reject);
  });
};
/**
 * Send a LSP notification to the server.
 */
const sendNotification = (method: string, params: object) => {
  sendMessage({ method, params });
};

const requestHandlers: Record<
  string,
  (
    params: any,
    succeed: (result: unknown) => void,
    fail: (error: unknown) => void,
  ) => void
> = {};
const notificationHandlers: Record<string, (params: any) => void> = {
  "$/progress": (params: {
    token: string;
    value: { kind: "begin" | "report" | "end" };
  }) => {
    const { kind, ...rest } = params.value;
    switch (kind) {
      case "begin":
        const onBegin = onBeginMap.get(params.token);
        if (onBegin) {
          onBegin(rest as any);
        } else {
          console.warn(`Unhandled progress begin for ${params.token}:`, params);
        }
        break;
      case "report":
        const onReport = onReportMap.get(params.token);
        if (onReport) {
          onReport(rest as any);
        } else {
          console.warn(
            `Unhandled progress report for ${params.token}:`,
            params,
          );
        }
        break;
      case "end":
        const onEnd = onEndMap.get(params.token);
        if (onEnd) {
          onEnd(rest as any);
        } else {
          console.warn(`Unhandled progress end for ${params.token}:`, params);
        }
        break;
      default:
        console.error(`Unknown progress kind: ${kind}`);
        break;
    }
  },

  "window/logMessage": (params: { type: number; message: string }) => {
    switch (params.type) {
      case 1:
        console.error(params.message);
        break;
      case 2:
        console.warn(params.message);
        break;
      case 3:
        console.info(params.message);
        break;
      case 4:
        console.log(params.message);
        break;
      case 5:
        console.debug(params.message);
        break;
      default:
        throw new Error(`Unknown log type: ${params.type}`);
    }
  },
};

/**
 * Register a request handler.
 */
const onRequest = (
  method: string,
  handler: (typeof requestHandlers)[keyof typeof requestHandlers],
) => {
  requestHandlers[method] = handler;
};
/**
 * Register a notification handler.
 */
const onNotification = (
  method: string,
  handler: (typeof notificationHandlers)[keyof typeof notificationHandlers],
) => {
  notificationHandlers[method] = handler;
};

/**
 * Handle received LSP payload.
 */
const handleReceivedPayload = (payload: object) => {
  // Response
  if ("id" in payload && typeof payload.id === "number") {
    if ("result" in payload) {
      const resolve = resolveMap.get(payload.id);
      if (resolve) {
        resolve(payload.result);
        resolveMap.delete(payload.id);
        return;
      }
    } else if (
      "error" in payload &&
      typeof payload.error === "object" &&
      payload.error !== null
    ) {
      const reject = rejectMap.get(payload.id);
      if (reject) {
        reject(payload.error);
        rejectMap.delete(payload.id);
        return;
      }
    }
  }

  // Request from server
  if (
    "id" in payload &&
    typeof payload.id === "number" &&
    "method" in payload &&
    typeof payload.method === "string" &&
    "params" in payload &&
    typeof payload.params === "object" &&
    payload.params !== null
  ) {
    const handler = requestHandlers[payload.method];
    if (handler) {
      handler(
        payload.params,
        (result) => {
          sendMessage({ id: payload.id, result });
        },
        (error) => {
          sendMessage({ id: payload.id, error });
        },
      );
    } else {
      console.warn(`Unhandled ${payload.method} request:`, payload.params);
    }
    return;
  }

  // Notification from server
  if (
    "method" in payload &&
    typeof payload.method === "string" &&
    "params" in payload &&
    typeof payload.params === "object" &&
    payload.params !== null
  ) {
    const handler = notificationHandlers[payload.method];
    if (handler) {
      handler(payload.params);
    } else {
      console.warn(`Unhandled ${payload.method} notification`, payload.params);
    }
    return;
  }

  console.error(`Unhandled payload:`, payload);
};

let buffer = Buffer.alloc(0);
server.stdout.on("data", (chunk: Buffer) => {
  buffer = Buffer.concat([buffer, chunk]);

  while (true) {
    const headerEnd = buffer.indexOf("\r\n\r\n");
    if (headerEnd === -1) break;

    const header = buffer.subarray(0, headerEnd).toString("utf8");
    // Parse headers (case-insensitive), allow extra headers like Content-Type
    let contentLength = -1;
    for (const line of header.split("\r\n")) {
      const idx = line.indexOf(":");
      if (idx === -1) continue;
      const key = line.slice(0, idx).trim().toLowerCase();
      const value = line.slice(idx + 1).trim();
      if (key === "content-length") {
        const n = parseInt(value, 10);
        if (Number.isFinite(n) && n >= 0) contentLength = n;
      }
    }
    if (contentLength < 0) {
      // Malformed header, drop until after current header block and continue
      buffer = buffer.subarray(headerEnd + 4);
      continue;
    }

    const total = headerEnd + 4 + contentLength;
    if (buffer.length < total) break; // Wait for more bytes

    const body = buffer.subarray(headerEnd + 4, total).toString("utf8");
    buffer = buffer.subarray(total);

    let payload: any;
    try {
      payload = JSON.parse(body);
    } catch (e) {
      console.error("Unable to parse payload body:", body, e);
      continue;
    }

    handleReceivedPayload(payload);
  }
});

const wait = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

/* Main */
onRequest(
  "conversation/context",
  (
    _params: { conversationId: string; turnId: string; skillId: string },
    succeed,
  ) => {
    // Not sure what to do here,
    // just return an empty array to make the server happy
    succeed([]);
  },
);

const main = async () => {
  await wait(1000); // Wait for server to start

  await sendRequest("initialize", {
    capabilities: { workspace: { workspaceFolders: true } },
    initializationOptions: {
      editorInfo: {
        name: "your editor",
        version: "1.0.0",
      },
      editorPluginInfo: {
        name: "GitHub Copilot for your editor",
        version: "1.0.0",
      },
    },
  });
  sendNotification("initialized", {});

  // This `conversation/preconditions` request is optional,
  // you can use its response to check if chat is enabled
  await sendRequest("conversation/preconditions", {});

  const INITIAL_PROMPT = "You are a helpful assistant.";

  await sendRequest(
    "conversation/create",
    {
      turns: [{ request: INITIAL_PROMPT }],
      capabilities: { allSkills: true, skills: [] },
      // Add a readable prefix for better debugging
      workDoneToken: `conversation/create:${crypto.randomUUID()}`,
      computeSuggestions: true,
      source: "panel",
    },
    {
      onBegin: (payload: { conversationId: string }) => {
        console.log(`Conversation created with ID: ${payload.conversationId}`);
      },
      onReport: async (payload: { conversationId: string }) => {
        await sendRequest(
          "conversation/turn",
          {
            conversationId: payload.conversationId,
            message: "Can you tell me how to use GitHub Copilot effectively?",
            workDoneToken: crypto.randomUUID(),
          },
          {
            onBegin: (payload: { conversationId: string }) => {
              console.log(`Turn started: ${payload.conversationId}`);
            },
            onReport: (payload: {
              steps?: { title: string; status: string }[];
              reply?: string;
            }) => {
              if (payload.steps) {
                for (const { title, status } of payload.steps)
                  console.log(`${title}: ${status}`);
              } else if (payload.reply) {
                console.log(payload.reply);
              } else {
                console.log(payload);
              }
            },
          },
        );
      },
    },
  );
};

await main();

After running the example, you should see something like the following output in your console.

[lsp] GitHub Copilot Language Server 1.389.0 initialized
[default] Policy watcher started for GitHub Copilot Plugin
Unhandled policy/didChange notification { 'mcp.contributionPoint.enabled': true }
[certificates] Removed 92 expired certificates
Unhandled statusNotification notification { busy: false, kind: 'Normal', status: 'Normal', message: '' }
Unhandled didChangeStatus notification { busy: false, kind: 'Normal' }
[CopilotMCP] MCP state changed from false to true
[CopilotMCP] Allowlist feature disabled for this build, allowing all servers without validation
Unhandled copilot/mcpTools notification { servers: [] }
Unhandled conversation/preconditionsNotification notification {
  results: [
    { type: 'token', status: 'failed' },
    { type: 'chat_enabled', status: 'ok' }
  ],
  status: 'failed'
}
Unhandled featureFlagsNotification notification {
  rt: true,
  sn: false,
  chat: true,
  ic: true,
  pc: true,
  ae: {},
  agent_as_default: false,
  byok: true,
  data_migration_completed: false
}
Conversation created with ID: 4343170f-fa9d-4475-b9a2-cd848736c947
Turn started: 4343170f-fa9d-4475-b9a2-cd848736c947
Collecting context: running
[fetchChat] Request 5948f972-1b02-43b2-add1-6ac0d63d91e1 at <https://api.individual.githubcopilot.com/chat/completions> finished with 200 status after 371.4863999999998ms
[streamMessages] message 0 returned. finish reason: [stop]
Reading git information: running
Reading git information: completed
Collecting context: completed
Generating response: running
[fetchChat] Request a40a0fa0-f21b-491f-bba0-b8f45e193528 at <https://api.individual.githubcopilot.com/chat/completions> finished with 200 status after 464.26569999999947ms
[streamMessages] message 0 returned. finish reason: [stop]
Reading git information: running
Reading git information: completed
Collecting context: completed
Generating response: running
[fetchChat] Request 04c32345-6a48-42ab-b615-81a66847f78b at <https://api.individual.githubcopilot.com/chat/completions> finished with 200 status after 421.66039999999975ms
To use GitHub Copilot effectively:


[fetchChat] Request 91d84b65-aece-4ad4-bdd4-b0e4efd71ea5 at <https://api.individual.githubcopilot.com/chat/completions> finished with 200 status after 482.1976999999997ms
To use GitHub Copilot effectively:


1. **Write Descriptive Comments**: Add clear comments to describe the functionality you want. Copilot uses these to generate relevant code.


1. **Write Descriptive Comments**: Add clear comments to describe the functionality you want. Copilot uses these to generate relevant code.


2. **Start with Function/Method Names**: Begin typing a function or method name, and Copilot will suggest implementations based on the name.


2. **Start Small**: Begin with small, specific tasks. Copilot works best when the context is focused.


3. **Iterate on Suggestions**: If the first suggestion isn't ideal, use the keyboard shortcut `Alt + ]` (Windows) to cycle through alternative suggestions.


3. **Review Suggestions**: Always review the code Copilot generates to ensure it meets your requirements and is secure.


4. **Use Context**: Provide enough context in your code (e.g., function names, variable names) to guide Copilot's suggestions.


4. **Provide Context**: Ensure your code file has enough context (e.g., imports, existing functions) for Copilot to make accurate predictions.


5. **Use Inline Suggestions**: Accept inline suggestions with `Tab` or dismiss them with `Esc`.


5. **Iterate**: If the suggestion isn't perfect, tweak your code or comments and let Copilot try again.


6. **Keyboard Shortcuts**:

6. **Leverage Documentation**: Use Copilot to generate boilerplate code or documentation by typing a brief description.


   - `Tab`: Accept a suggestion.

   - `Ctrl` + `]`: Cycle through suggestions.

   - `Esc`: Dismiss suggestions.


7. **Experiment with Prompts**: Try different ways of phrasing your comments or code to get better results.


7. **Leverage Documentation**: Use Copilot for boilerplate code, repetitive tasks, or exploring unfamiliar libraries.


8. **Review Carefully**: Always review the generated code for correctness, security, and adherence to your project’s standards.


8. **Combine with Testing**: Pair Copilot with unit tests to verify the generated code.


9. **Stay Updated**: Keep your Copilot extension updated for the latest features and improvements.
[streamMessages] message 0 returned. finish reason: [stop]
9. **Use in Supported Languages**: Copilot works best with popular languages like Python, JavaScript, TypeScript, Java, and C#.


10. **Enable/Disable as Needed**: Use the Copilot settings to enable or disable it for specific files or projects.
[streamMessages] message 0 returned. finish reason: [stop]
[fetchChat] Request 67718536-9bf6-47ea-940a-7fa3b82e2943 at <https://api.individual.githubcopilot.com/chat/completions> finished with 200 status after 389.3296ms
[fetchChat] Request 690083fb-78ac-4a16-a314-9e013754d25b at <https://api.individual.githubcopilot.com/chat/completions> finished with 200 status after 409.9220000000005ms
[streamMessages] message 0 returned. finish reason: [stop]
Generating response: completed
Unhandled progress end for 5db27e1a-8b12-4197-86f2-addbcd454227: {
  token: '5db27e1a-8b12-4197-86f2-addbcd454227',
  value: {
    kind: 'end',
    conversationId: '4343170f-fa9d-4475-b9a2-cd848736c947',
    turnId: '2e2774b0-eab7-479d-9c0c-c43ee7c645e1',
    followUp: {
      message: 'What are some common mistakes to avoid when using GitHub Copilot?',
      id: '55c09eb9-05e5-42c0-9c13-55b7bbc57c25',
      type: 'Follow-up from model'
    },
    suggestedTitle: 'Effective Use of GitHub Copilot',
    skillResolutions: [ [Object], [Object], [Object], [Object] ],
    updatedDocuments: []
  }
}
[streamMessages] message 0 returned. finish reason: [stop]
[chat] Work done token for conversation 4343170f-fa9d-4475-b9a2-cd848736c947 is already done, last updated at 1761933763958
[chat] Work done token for conversation 4343170f-fa9d-4475-b9a2-cd848736c947 is already done, last updated at 1761933763958

5 Comments

do you know how to send custom prompts to Copilot as it is done in the chat window?
It seems you’re asking how to invoke GitHub Copilot Chat programmatically and send custom prompts (like system messages) as you would in the chat UI. The GitHub Copilot Language Server is designed mainly for inline (tab) completions, not conversational interactions. However, you can send custom prompts to Copilot Chat through its (undocumented) Web API, which supports messages of different roles (e.g., system, user, assistant). You can take a look at CopilotChat.nvim, which wraps that API and lets you specify system messages.
If you’re not familiar with the Lua implementation in CopilotChat.nvim, you can also check out an open-source project I wrote, which adapts its logic and invokes Copilot Chat via Node.js.
Thanks for the clarification. I also checked LSP-copilot line 37-52 from your original post and found that it also has chat integration. Isn't this done via the language server SDK?
That’s interesting! It looks like I missed that part over the past year. I just tried reproducing how to call GitHub Copilot Chat using the official language server, and I’ve updated my answer accordingly—it now includes a short example showing how to send chat requests through it.

TLDR: I made an (unofficial) API: https://github.com/B00TK1D/copilot-api

I was inspired by how simple @Snowflyt's API interface was, so I reverse engineered the Copilot vim plugin and its associated API. I matched the way they do OAuth (normal GitHub Apps aren't allowed Copilot access, so I just use Copilot's App ID). This way, you don't have to install vim and the plugin and go through the setup process.

The linked repo includes a full self-hosted solution that you can call from whatever other functionality you want. The first time you start it, you'll have to complete OAuth (enter the provided code at the link), and after that it automatically refreshes all the auth tokens. There are some more advanced Copilot features I haven't added yet, but it currently supports basic code completion prompting.

To answer exactly the question you asked: if you want to record the prompts and top responses from the Copilot plugin as you're using it, you can wrap the plugin's agent.js file with a logger, for example with the following steps (assuming Linux, and that you have the Vim copilot plugin installed):

  1. mv ~/.config/nvim/pack/github/start/copilot.vim/dist/agent.js ~/.config/nvim/pack/github/start/copilot.vim/dist/agent.orig.js
  2. Edit the file ~/.config/nvim/pack/github/start/copilot.vim/dist/agent.js, and paste in the following code:
const fs = require('fs');
const { spawn } = require('child_process');

const inLogStream = fs.createWriteStream('copilot-prompts.log', { flags: 'a' });
const outLogStream = fs.createWriteStream('copilot-suggestions.log', { flags: 'a' });

// Replace the path with the absolute path for agent.js
const agentScriptPath = '/root/.config/nvim/pack/github/start/copilot.vim/dist/agent.orig.js';

// Spawn a new process running agent.js with the absolute path
const agentProcess = spawn('node', [agentScriptPath]);

// Pipe stdin from the main script to the new process and log it
process.stdin.pipe(inLogStream);
process.stdin.pipe(agentProcess.stdin);

// Pipe stdout from the new process back to the main script's stdout and log it
agentProcess.stdout.pipe(outLogStream);
agentProcess.stdout.pipe(process.stdout);

// Handle process exit
agentProcess.on('exit', (code, signal) => {
  console.log(`Agent process exited with code ${code} and signal ${signal}`);
  inLogStream.end();
  outLogStream.end();
  process.exit();
});

// Handle errors
agentProcess.on('error', (err) => {
  console.error(`Error in agent process: ${err.message}`);
  inLogStream.end();
  outLogStream.end();
  process.exit(1);
});

// Handle main script stdin end
process.stdin.on('end', () => {
  // Close the stdin stream for the spawned process when main script stdin ends
  agentProcess.stdin.end();
});

// Handle main script exit
process.on('exit', () => {
  // Kill the spawned process when the main script exits
  agentProcess.kill();
});

// Handle main script termination
process.on('SIGINT', () => {
  // Handle Ctrl+C to gracefully terminate both the main script and the spawned process
  process.exit();
});
  3. Use copilot in vim as desired to generate logs.
  4. View the prompts and suggestions in ~/copilot-prompts.log and ~/copilot-suggestions.log.

The output of these logs requires some parsing because they use JSON-RPC, but I'll let you decide exactly how you want to implement that.
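
As a starting point, here is a hedged sketch of such a parser; it undoes the `Content-Length` framing described in the accepted answer, and `parseLspLog` is a hypothetical name of mine:

```javascript
// Hedged sketch: split a captured log back into JSON-RPC payloads framed as
// `Content-Length: <n>\r\n\r\n<body>`.
const parseLspLog = (buf) => {
  const payloads = [];
  let rest = buf;
  while (true) {
    const headerEnd = rest.indexOf("\r\n\r\n");
    if (headerEnd === -1) break;
    const header = rest.subarray(0, headerEnd).toString("utf8");
    const match = header.match(/Content-Length: (\d+)/i);
    if (!match) {
      rest = rest.subarray(headerEnd + 4); // skip a malformed header block
      continue;
    }
    const length = Number(match[1]);
    const body = rest.subarray(headerEnd + 4, headerEnd + 4 + length).toString("utf8");
    rest = rest.subarray(headerEnd + 4 + length);
    try {
      payloads.push(JSON.parse(body));
    } catch {
      // Truncated body at the end of the log; ignore it
    }
  }
  return payloads;
};

// Usage against the logs produced by the wrapper above:
// const fs = require("node:fs");
// const payloads = parseLspLog(fs.readFileSync("copilot-suggestions.log"));
```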

1 Comment

Thanks for this breakdown! And for your additional repositories like freegpt... The chat/completions endpoint is what I was looking for. And your blog page explains the story really well.

GitHub doesn't publish these APIs publicly, as of yet.

However, it was speculated that GitHub Copilot used OpenAI's Codex (which is now deprecated). According to this, you can use OpenAI's chat models for code completion, suggestion, etc. Though from my experience, the response time varies. Also, there's no guarantee that it will output only code.

Check the example below:

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {
     "role": "system",
     "content": "You are a helpful assistant. Assistant will output only and only code as a response."
    },
    {
      "role": "user",
      "content": "Write a Python function that takes as input a file path to an image, loads the image into memory as a numpy array, then crops the rows and columns around the perimeter if they are darker than a threshold value. Use the mean value of rows and columns to decide if they should be marked for deletion."
    }
  ],
  temperature=0,
  max_tokens=1024
)

Which will output:

import numpy as np
from PIL import Image

def crop_dark_borders(image_path, threshold):
    # Load the image
    image = Image.open(image_path)
    # Convert the image to a numpy array
    image_array = np.array(image)
    
    # Calculate the mean value of each row and column
    row_means = np.mean(image_array, axis=1)
    col_means = np.mean(image_array, axis=0)

    ...

Edit: On second thought, I wouldn't use ChatCompletion for this, because the task is not chat-based at all. Instead, I would use Completion and supply the whole code file as input. This has its own limitations too; for example, you wouldn't be able to tell the model what comes after the cursor.

8 Comments

However, this method doesn't capture the code completion suggestions that GitHub Copilot provided. The code snippet here merely demonstrates how to invoke OpenAI APIs. My requirement is to record the real-time suggestions provided by the GitHub Copilot plugin.
@Exploring Have you tried running your IDE's requests through a proxy, so that all the copilot API requests are logged? From there you can write your own API wrapper which will get the autocompletions from GitHub.
@Mave, I was doing just that. Although that might get you banned pretty quickly; the top suggestions are not invoked on every keystroke, I think.
@Exploring, your requirements are still unclear. Invoking the extension, invoking the API, record the real-time suggestions, these are all different tasks. I'm currently investigating how the API can be accessed, is that okay?
Did some digging. Capturing packets was a no-go for me; I couldn't decrypt TLS. So I checked Neovim's Copilot support. They mentioned installing Node.js, which was odd. Then I found a custom plugin, LSP-copilot, which requires you to install an LSP client that uses JSON-RPC to communicate with it (and it's the same thing for Neovim). I skimmed through the source code for LSP-copilot and it seems promising. Sharing this because it might help somebody else too.

It is just LSP. Look at what IntelliJ does to get an idea of a Java-based LSP client. It's fairly simple. Set a breakpoint in VSCodeJRPC's sendData() and you can track all the JSON messages that are transmitted. Then you'll understand the message sequence and can embed it in your own code.

1 Comment

What do you mean by setting "a breakpoint in VSCodeJRPC in sendData()"? Can we debug the IntelliJ environment from VS Code or something? Thanks.
