
Mariana Naboka

Tips and Tricks for Working with Annotations in CAP Projects using Language Server Protocol (LSP) capabilities

Tip 1. Adjust snippets verbosity


Annotation LSP in CAP: Omit Redundant Types in Snippets

Tip 2. Filter completion lists and pay attention to icons


Annotation LSP: Icon Types in Code Completion

Tip 3. Resolve diagnostic issues based on severity


Annotation LSP in CAP: Display Diagnostics by Severity

Tip 4. Display i18n issues when you need it


Annotation LSP in CAP: Display missing i18n default setting


Eclipse newsletter.

LSP4E in Eclipse IDE: more for Language (and debug) Servers!

With Eclipse Photon, the Eclipse ecosystem has consolidated its adherence to decoupled development tools, relying on Language Server Protocol and others. The examples of Eclipse projects interacting with the Language Server Protocol are numerous: LSP4J, LSP4E, aCute (C# edition in Eclipse IDE), Corrosion (Rust edition in Eclipse IDE), JDT.LS (Java edition in VSCode and Theia), Xtext, Ceylon, Che, Theia…

We believe this trend, together with the rich and noteworthy new features of Eclipse LSP4E, makes it worth an article. Enjoy ;)

Support for Debug Adapter Protocol

LSP4E now supports the Debug Adapter Protocol created for VS Code. Similar to the language server protocol, this now means tool developers may reuse debug adapters developed for the debug protocol and have them automatically integrated with Eclipse debugger functionality.


Debug Protocol Overview

The debug protocol provides a generic interface based on JSON between the Client (IDE or editor) and the Server (the debugger or an adapter to the debugger).

The protocol consists of:

For more details see the JSON or XTEND version of the protocol.

Integrating a Debug Adapter

LSP4E provides a Debug Adapter launch configuration which allows tool developers to quickly launch their debug extension and test out how it would work in the IDE. Once experiments have been made a launch configuration for the specific debug adapter can be created.


To try it out, install the Debug Adapter client for Eclipse IDE (Incubation) from the update site into your Eclipse installation. You will also need a debug adapter. You can get one from VS Code (in your ~/.vscode/extensions directory).

Create a Debug Adapter Launcher debug configuration.

For example, the Command is likely to be the path to node and the argument the path to the extension's JavaScript entry point. For the Python extension, this is HOME/.vscode/extensions/ms-python.python-VERSION/out/client/debugger/Main.js, with HOME and VERSION adapted to your machine.

Fill in the Launch Parameters (Json) with the bit of JSON that your specific launch adapter understands.

To create your own Eclipse plug-in that uses LSP4E’s Debug implementation you have two choices:

The advantage of option 1 is greater reuse of some tricky code to launch and trace a debug adapter. The advantage of option 2 is that DSPDebugTarget receives a Map, so clients of DSPDebugTarget do not have to use/store JSON in their launch configuration.

Using a Debug Adapter

From the end user perspective, nothing changes. They don’t have to do anything different to work with debugger extensions - just debug as normal. Today the debug protocol supports all essential debugging features, but does not support some advanced features of the Eclipse debugger. We welcome contributors who want to drive this forward with the community, see below for how to get involved!

The debug protocol implementation in LSP4E enables Eclipse IDE to quickly unlock support for new debuggers and debugger features. It also provides a way to support custom debuggers written in different programming languages. Tool developers can develop debug features in an agile and more efficient way for their users and keep up with the pace of fast-changing technology.

Support for latest version of the Language Server Protocol

As usual, LSP4E also comes with support for the latest version of the protocol by adopting the latest version of the LSP4J API. As such, compared to the previous 0.4.0 version, it includes interesting new operations and options. The most notable additions are support for:


More control and more logs for users and developers

In order to better enable complex language servers whose lifecycle is very dynamic, LSP4E has added the capability for LS integrators to refine when their LS is enabled for a given file, using a typical enabledWhen expression on the extension point.

LSP4E also now offers end users the possibility to disable a language server for a given file-type. This can be convenient when the user has multiple extensions installed for a given file, one of which is based on a language server, and finds that the language-server-based one is of inferior quality compared to the other. The user can then disable the Language Server for those files in one click.


And finally, to help Language Server and LSP4E developers and integrators troubleshoot their integrations, easy-to-trigger logging capabilities were added to LSP4E. A dedicated preference page makes them trivial to control.


Possible integration with Docker

During the last release cycle, LSP4E developers also investigated possible integration with Docker-based language servers, and validated that, with minimal effort, it’s possible to use a Docker image as a language server, including images developed for Eclipse Che. See the following demo:

Some next steps may happen during the summer on this topic. The project mailing-list is the best place to keep in touch.

LSP4E is your project too ;)

Like any Eclipse.org project, LSP4E is a really open project. It’s everyone’s project. So if you’d like to learn more about it, or simply be aware of the main discussions, or get involved in its continuous development, here are the entry-points:

It’s also important to acknowledge the work of the many contributors to this release: Angelo Zerr, Alex Boyko, Elliotte Harold, Jonah Graham, Kris De Volder, Lucas Bullen, Lucia Jelinkova, Markus Ofterdinger, Martin Lippert, Mickael Istria, Philip Alldredge, Rastislav Wagner and Remy Suen.

About the Authors

Jonah Graham, Kichwa Coders

Mickael Istria, Red Hat


Language Server Protocol Specification - 3.17

This document describes the 3.17.x version of the language server protocol. An implementation for node of the 3.17.x version of the protocol can be found here.

Note: edits to this specification can be made via a pull request against this markdown document.

What’s new in 3.17

All new 3.17 features are tagged with a corresponding since version 3.17 text or in JSDoc using a @since 3.17.0 annotation. Major new features are: type hierarchy, inline values, inlay hints, notebook document support and a meta model that describes the 3.17 LSP version.

A detailed list of the changes can be found in the change log.

The version of the specification is used to group features into a new specification release and to refer to their first appearance. Features in the spec are kept compatible using so called capability flags which are exchanged between the client and the server during initialization.

Base Protocol

The base protocol consists of a header and a content part (comparable to HTTP). The header and content part are separated by a ‘\r\n’.

The header part consists of header fields. Each header field comprises a name and a value, separated by ‘: ‘ (a colon and a space). The structure of header fields conforms to HTTP semantics. Each header field is terminated by ‘\r\n’. Considering the last header field and the overall header itself are each terminated with ‘\r\n’, and that at least one header is mandatory, this means that two ‘\r\n’ sequences always immediately precede the content part of a message.
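As a sketch (the helper name is illustrative, not part of the spec), framing a message with the mandatory Content-Length header could look like this:

```typescript
// Sketch: frame a JSON-RPC message for the wire. Content-Length counts the
// bytes of the content part (UTF-8 by default); the header block and the
// content are separated by a blank line, i.e. two '\r\n' sequences.
function frame(content: object): string {
  const body = JSON.stringify(content);
  const byteLength = new TextEncoder().encode(body).length;
  return `Content-Length: ${byteLength}\r\n\r\n${body}`;
}

const framed = frame({ jsonrpc: "2.0", id: 1, method: "initialize", params: {} });
```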

Currently the following header fields are supported:

The header part is encoded using the ‘ascii’ encoding. This includes the ‘\r\n’ separating the header and content part.

The content part contains the actual content of the message. The content part of a message uses JSON-RPC to describe requests, responses and notifications. The content part is encoded using the charset provided in the Content-Type field. It defaults to utf-8, which is the only encoding supported right now. If a server or client receives a header with a different encoding than utf-8 it should respond with an error.

(Prior versions of the protocol used the string constant utf8, which is not a correct encoding constant according to the specification.) For backwards compatibility it is highly recommended that a client and a server treat the string utf8 as utf-8.

Base Protocol JSON structures

The following TypeScript definitions describe the base JSON-RPC protocol :

The protocol uses the following definitions for integers, unsigned integers, decimal numbers, objects and arrays:

Abstract Message

A general message as defined by JSON-RPC. The language server protocol always uses “2.0” as the jsonrpc version.

A request message to describe a request between the client and the server. Every processed request must send a response back to the sender of the request.

A Response Message sent as a result of a request. If a request doesn’t provide a result value the receiver of a request still needs to return a response message to conform to the JSON-RPC specification. The result property of the ResponseMessage should be set to null in this case to signal a successful request.

A notification message. A processed notification message must not send a response back. They work like events.
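The TypeScript definitions referenced above were lost in extraction; the four message shapes can be sketched as follows (simplified, using plain `number | string` in place of the spec's integer aliases):

```typescript
interface Message {
  jsonrpc: string; // always "2.0" in LSP
}

interface RequestMessage extends Message {
  id: number | string;          // the request id
  method: string;               // the method to be invoked
  params?: unknown[] | object;  // the method's params
}

interface ResponseError {
  code: number;
  message: string;
  data?: unknown;
}

interface ResponseMessage extends Message {
  id: number | string | null;
  result?: unknown;      // set on success (null signals success without a value)
  error?: ResponseError; // set in case of an error
}

interface NotificationMessage extends Message {
  method: string;
  params?: unknown[] | object;
}

const request: RequestMessage = { jsonrpc: "2.0", id: 1, method: "textDocument/hover" };
```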

$ Notifications and Requests

Notifications and requests whose methods start with ‘$/’ are messages which are protocol implementation dependent and might not be implementable in all clients or servers. For example, if the server implementation uses a single threaded synchronous programming language then there is little a server can do to react to a $/cancelRequest notification. If a server or client receives notifications starting with ‘$/’ it is free to ignore them. If a server or client receives a request starting with ‘$/’ it must error the request with error code MethodNotFound (e.g. -32601).

Cancellation Support

The base protocol offers support for request cancellation. To cancel a request, a notification message with the following properties is sent:

Notification :
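The notification details were stripped here; a sketch, per the spec: the method is `$/cancelRequest` and the params carry the id of the request to cancel.

```typescript
interface CancelParams {
  /** The request id to cancel. */
  id: number | string;
}

// A cancellation notification for the request with id 42:
const cancelNotification = {
  jsonrpc: "2.0",
  method: "$/cancelRequest",
  params: { id: 42 } as CancelParams,
};
```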

A request that got canceled still needs to return from the server and send a response back. It can not be left open / hanging. This is in line with the JSON-RPC protocol that requires that every request sends a response back. In addition it allows for returning partial results on cancel. If the request returns an error response on cancellation it is advised to set the error code to ErrorCodes.RequestCancelled .

Progress Support

Since version 3.15.0

The base protocol offers also support to report progress in a generic fashion. This mechanism can be used to report any kind of progress including work done progress (usually used to report progress in the user interface using a progress bar) and partial result progress to support streaming of results.

A progress notification has the following properties:
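The stripped properties can be sketched like this (the token value is illustrative); the notification method is `$/progress`:

```typescript
type ProgressToken = number | string;

interface ProgressParams<T> {
  /** The progress token provided by the client or server. */
  token: ProgressToken;
  /** The progress data. */
  value: T;
}

const progress: ProgressParams<{ kind: string; message?: string }> = {
  token: "token-1",
  value: { kind: "report", message: "indexing" },
};
```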

Progress is reported against a token. The token is different from the request ID, which allows progress to be reported out of band and also for notifications.

Language Server Protocol

The language server protocol defines a set of JSON-RPC request, response and notification messages which are exchanged using the above base protocol. This section starts by describing the basic JSON structures used in the protocol. The document uses TypeScript interfaces in strict mode to describe these. This means for example that a null value has to be explicitly listed and that a mandatory property must be listed even if a falsy value might exist. Based on the basic JSON structures, the actual requests with their responses and the notifications are described.

An example would be a request sent from the client to the server to request a hover value for a symbol at a certain position in a text document. The request’s method would be textDocument/hover with a parameter like this:
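The example parameter was stripped here; a sketch (the URI and coordinates are illustrative):

```typescript
// Parameters for a textDocument/hover request: the document to look in
// and the zero-based position of the symbol.
const hoverParams = {
  textDocument: { uri: "file:///home/user/project/use.cpp" },
  position: { line: 3, character: 12 },
};
```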

The result of the request would be the hover to be presented. In its simple form it can be a string. So the result looks like this:
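A minimal result using the simple string form of contents (the text is illustrative):

```typescript
// Hover result in its simplest form: contents is just a string.
const hoverResult = {
  contents: "Documentation for the symbol under the cursor",
};
```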

Please also note that a response return value of null indicates no result. It doesn’t tell the client to resend the request.

In general, the language server protocol supports JSON-RPC messages, however the base protocol defined here uses a convention such that the parameters passed to request/notification messages should be of object type (if passed at all). However, this does not disallow using Array parameter types in custom messages.

The protocol currently assumes that one server serves one tool. There is currently no support in the protocol to share one server between different tools. Such a sharing would require additional protocol e.g. to lock a document to support concurrent editing.

Not every language server can support all features defined by the protocol. LSP therefore provides ‘capabilities’. A capability groups a set of language features. A development tool and the language server announce their supported features using capabilities. As an example, a server announces that it can handle the textDocument/hover request, but it might not handle the workspace/symbol request. Similarly, a development tool announces its ability to provide about to save notifications before a document is saved, so that a server can compute textual edits to format the edited document before it is saved.

The set of capabilities is exchanged between the client and server during the initialize request.

Request, Notification and Response Ordering

Responses to requests should be sent in roughly the same order as the requests appear on the server or client side. So for example if a server receives a textDocument/completion request and then a textDocument/signatureHelp request it will usually first return the response for the textDocument/completion and then the response for textDocument/signatureHelp .

However, the server may decide to use a parallel execution strategy and may wish to return responses in a different order than the requests were received. The server may do so as long as this reordering doesn’t affect the correctness of the responses. For example, reordering the result of textDocument/completion and textDocument/signatureHelp is allowed, as each of these requests usually won’t affect the output of the other. On the other hand, the server most likely should not reorder textDocument/definition and textDocument/rename requests, since executing the latter may affect the result of the former.

As said, LSP defines a set of requests, responses and notifications. Each of those is documented using the following format:

Basic JSON Structures

There are quite some JSON structures that are shared between different requests and notifications. Their structure and capabilities are documented in this section.

URIs are transferred as strings. The URI format is defined in https://tools.ietf.org/html/rfc3986

We also maintain a node module to parse a string into scheme, authority, path, query, and fragment URI components. The GitHub repository is https://github.com/Microsoft/vscode-uri and the npm module is https://www.npmjs.com/package/vscode-uri.

Many of the interfaces contain fields that correspond to the URI of a document. For clarity, the type of such a field is declared as a DocumentUri . Over the wire, it will still be transferred as a string, but this guarantees that the contents of that string can be parsed as a valid URI.

There is also a tagging interface for normal non document URIs. It maps to a string as well.

Regular Expressions

Regular expressions are a powerful tool and there are actual use cases for them in the language server protocol. However, the downside is that almost every programming language has its own set of regular expression features, so the specification can not simply refer to them as a regular expression. So the LSP uses a two step approach to support regular expressions:

Client Capability :

The following client capability is used to announce a client’s regular expression engine
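The capability definition was stripped here; a sketch of its shape (the example values are illustrative):

```typescript
interface RegularExpressionsClientCapabilities {
  /** The engine's name, e.g. "ECMAScript". */
  engine: string;
  /** The engine's version, e.g. "ES2020". */
  version?: string;
}

const regExpCapability: RegularExpressionsClientCapabilities = {
  engine: "ECMAScript",
  version: "ES2020",
};
```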

The following table lists the well known engine values. Please note that the table should be driven by the community which integrates LSP into existing clients. It is not the goal of the spec to list all available regular expression engines.

Regular Expression Subset :

The following features from the ECMAScript 2020 regular expression specification are NOT mandatory for a client:

The only regular expression flag that a client needs to support is ‘i’ to specify a case insensitive search.

The protocol supports two kinds of enumerations: (a) integer-based enumerations and (b) string-based enumerations. Integer-based enumerations usually start with 1. The ones that don’t are historical and were kept to stay backwards compatible. If appropriate, the value set of an enumeration is announced by the defining side (e.g. client or server) and transmitted to the other side during the initialize handshake. An example is the CompletionItemKind enumeration. It is announced by the client using the textDocument.completion.completionItemKind client property.

To support the evolution of enumerations, the using side of an enumeration shouldn’t fail on an enumeration value it doesn’t know. It should simply ignore it as a value it can use and try to do its best to preserve the value on round trips. Let’s look at the CompletionItemKind enumeration as an example again: if in a future version of the specification an additional completion item kind with the value n gets added and announced by a client, an (older) server not knowing about the value should not fail but simply ignore the value as a usable item kind.

The current protocol is tailored for textual documents whose content can be represented as a string. There is currently no support for binary documents. A position inside a document (see Position definition below) is expressed as a zero-based line and character offset.

New in 3.17

Prior to 3.17 the offsets were always based on a UTF-16 string representation. So in a string of the form a𐐀b the character offset of the character a is 0, the character offset of 𐐀 is 1 and the character offset of b is 3 since 𐐀 is represented using two code units in UTF-16. Since 3.17 clients and servers can agree on a different string encoding representation (e.g. UTF-8). The client announces its supported encodings via the client capability general.positionEncodings. The value is an array of position encodings the client supports, with decreasing preference (e.g. the encoding at index 0 is the most preferred one). To stay backwards compatible the only mandatory encoding is UTF-16, represented via the string utf-16. The server can pick one of the encodings offered by the client and signals that encoding back to the client via the initialize result’s property capabilities.positionEncoding. If the string value utf-16 is missing from the client’s capability general.positionEncodings servers can safely assume that the client supports UTF-16. If the server omits the position encoding in its initialize result the encoding defaults to the string value utf-16. Implementation considerations: since the conversion from one encoding into another requires the content of the file / line, the conversion is best done where the file is read, which is usually on the server side.

To ensure that both client and server split the string into the same line representation the protocol specifies the following end-of-line sequences: ‘\n’, ‘\r\n’ and ‘\r’. Positions are line end character agnostic. So you can not specify a position that denotes \r|\n or \n| where | represents the character offset.

Position in a text document expressed as zero-based line and zero-based character offset. A position is between two characters like an ‘insert’ cursor in an editor. Special values like for example -1 to denote the end of a line are not supported.

When describing positions the protocol needs to specify how offsets (specifically character offsets) should be interpreted. The corresponding PositionEncodingKind is negotiated between the client and the server during initialization.

A range in a text document expressed as (zero-based) start and end positions. A range is comparable to a selection in an editor. Therefore the end position is exclusive. If you want to specify a range that contains a line including the line ending character(s) then use an end position denoting the start of the next line. For example:
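The definitions and the stripped example can be sketched like this (using `number` in place of the spec's unsigned integer alias):

```typescript
interface Position {
  line: number;      // zero-based line
  character: number; // zero-based character offset (in the negotiated encoding)
}

interface Range {
  start: Position;
  end: Position; // exclusive
}

// Select all of line 5 including its line ending:
// the exclusive end denotes the start of the next line.
const wholeLine: Range = {
  start: { line: 5, character: 0 },
  end: { line: 6, character: 0 },
};
```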

TextDocumentItem

An item to transfer a text document from the client to the server.

Text documents have a language identifier to identify a document on the server side when it handles more than one language to avoid re-interpreting the file extension. If a document refers to one of the programming languages listed below it is recommended that clients use those ids.

TextDocumentIdentifier

Text documents are identified using a URI. On the protocol level, URIs are passed as strings. The corresponding JSON structure looks like this:
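The stripped structure can be sketched as (the URI is illustrative):

```typescript
type DocumentUri = string;

interface TextDocumentIdentifier {
  /** The text document's URI. */
  uri: DocumentUri;
}

const identifier: TextDocumentIdentifier = { uri: "file:///home/user/project/app.ts" };
```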

VersionedTextDocumentIdentifier

An identifier to denote a specific version of a text document. This information usually flows from the client to the server.

An identifier which optionally denotes a specific version of a text document. This information usually flows from the server to the client.

TextDocumentPositionParams

Was TextDocumentPosition in 1.0 with inlined parameters.

A parameter literal used in requests to pass a text document and a position inside that document. It is up to the client to decide how a selection is converted into a position when issuing a request for a text document. The client can for example honor or ignore the selection direction to make LSP requests consistent with features implemented internally.

DocumentFilter

A document filter denotes a document through properties like language , scheme or pattern . An example is a filter that applies to TypeScript files on disk. Another example is a filter that applies to JSON files with name package.json :
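The two stripped examples can be sketched as:

```typescript
interface DocumentFilter {
  language?: string;
  scheme?: string;
  pattern?: string;
}

// A filter that applies to TypeScript files on disk:
const tsOnDisk: DocumentFilter = { language: "typescript", scheme: "file" };

// A filter that applies to JSON files with name package.json:
const packageJson: DocumentFilter = { language: "json", pattern: "**/package.json" };
```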

Please note that for a document filter to be valid at least one of the properties for language , scheme , or pattern must be set. To keep the type definition simple all properties are marked as optional.

A document selector is the combination of one or more document filters.

TextEdit & AnnotatedTextEdit

New in version 3.16: Support for AnnotatedTextEdit .

A textual edit applicable to a text document.

Since 3.16.0 there is also the concept of an annotated text edit, which supports adding an annotation to a text edit. The annotation can add information describing the change to the text edit.

Usually clients provide options to group the changes along the annotations they are associated with. To support this in the protocol an edit or resource operation refers to a change annotation using an identifier and not the change annotation literal directly. This allows servers to use the identical annotation across multiple edits or resource operations which then allows clients to group the operations under that change annotation. The actual change annotations together with their identifiers are managed by the workspace edit via the new property changeAnnotations .

Complex text manipulations are described with an array of TextEdit’s or AnnotatedTextEdit’s, representing a single change to the document.

All text edits ranges refer to positions in the document they are computed on. They therefore move a document from state S1 to S2 without describing any intermediate state. Text edits ranges must never overlap, that means no part of the original document must be manipulated by more than one edit. However, it is possible that multiple edits have the same start position: multiple inserts, or any number of inserts followed by a single remove or replace edit. If multiple inserts have the same position, the order in the array defines the order in which the inserted strings appear in the resulting text.
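The rules above can be sketched as a small helper (a simplification, assuming '\n' line endings and well-formed, non-overlapping edits; the names are not part of the spec):

```typescript
interface Position { line: number; character: number; }
interface Range { start: Position; end: Position; }
interface TextEdit { range: Range; newText: string; }

// Apply edits back-to-front so earlier offsets stay valid. For inserts at the
// same position, apply the later array entry first so that the array order
// defines the order of the inserted strings in the result, as the spec requires.
function applyEdits(text: string, edits: TextEdit[]): string {
  const lineStarts: number[] = [0];
  for (let i = 0; i < text.length; i++) {
    if (text[i] === "\n") lineStarts.push(i + 1);
  }
  const offset = (p: Position) => lineStarts[p.line] + p.character;
  const indexed = edits.map((e, i) => ({ e, i }));
  indexed.sort(
    (a, b) => offset(b.e.range.start) - offset(a.e.range.start) || b.i - a.i
  );
  for (const { e } of indexed) {
    text =
      text.slice(0, offset(e.range.start)) +
      e.newText +
      text.slice(offset(e.range.end));
  }
  return text;
}
```

All offsets are computed against the original document, which is valid precisely because edits never overlap: every edit moves the document straight from state S1 to S2.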

TextDocumentEdit

New in version 3.16: support for AnnotatedTextEdit . The support is guarded by the client capability workspace.workspaceEdit.changeAnnotationSupport . If a client doesn’t signal the capability, servers shouldn’t send AnnotatedTextEdit literals back to the client.

Describes textual changes on a single text document. The text document is referred to as an OptionalVersionedTextDocumentIdentifier to allow clients to check the text document version before an edit is applied. A TextDocumentEdit describes all changes on version Si; after they are applied, the document moves to version Si+1. So the creator of a TextDocumentEdit doesn’t need to sort the array of edits or do any kind of ordering. However, the edits must be non overlapping.

Represents a location inside a resource, such as a line inside a text file.

LocationLink

Represents a link between a source and a target location.

Represents a diagnostic, such as a compiler error or warning. Diagnostic objects are only valid in the scope of a resource.

The protocol currently supports the following diagnostic severities and tags:

DiagnosticRelatedInformation is defined as follows:

CodeDescription is defined as follows:

Represents a reference to a command. Provides a title which will be used to represent a command in the UI. Commands are identified by a string identifier. The recommended way to handle commands is to implement their execution on the server side if the client and server provide the corresponding capabilities. Alternatively the tool extension code could handle the command. The protocol currently doesn’t specify a set of well-known commands.

MarkupContent

A MarkupContent literal represents a string value whose content can be represented in different formats. Currently plaintext and markdown are supported formats. A MarkupContent is usually used in documentation properties of result literals like CompletionItem or SignatureInformation . If the format is markdown the content should follow the GitHub Flavored Markdown Specification .

In addition clients should signal the markdown parser they are using via the client capability general.markdown introduced in version 3.16.0 defined as follows:

Known markdown parsers used by clients right now are:

File Resource changes

New in version 3.13. Since version 3.16 file resource changes can carry an additional property changeAnnotation to describe the actual change in more detail. Whether a client has support for change annotations is guarded by the client capability workspace.workspaceEdit.changeAnnotationSupport .

File resource changes allow servers to create, rename and delete files and folders via the client. Note that the names talk about files but the operations are supposed to work on files and folders. This is in line with other naming in the Language Server Protocol (see file watchers which can watch files and folders). The corresponding change literals look as follows:

WorkspaceEdit

A workspace edit represents changes to many resources managed in the workspace. The edit should either provide changes or documentChanges . If the client can handle versioned document edits and if documentChanges are present, the latter are preferred over changes .

Since version 3.13.0 a workspace edit can contain resource operations (create, delete or rename files and folders) as well. If resource operations are present, clients need to execute the operations in the order in which they are provided. So a workspace edit can for example consist of the following two changes: (1) create file a.txt and (2) a text document edit which inserts text into file a.txt. An invalid sequence (e.g. (1) delete file a.txt and (2) insert text into file a.txt) will cause failure of the operation. How the client recovers from the failure is described by the client capability: workspace.workspaceEdit.failureHandling
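The two-step example above can be sketched as a workspace edit literal (the URI is illustrative):

```typescript
// A workspace edit that (1) creates a.txt and then (2) inserts text into it.
// The operations must be executed by the client in this order.
const workspaceEdit = {
  documentChanges: [
    { kind: "create", uri: "file:///project/a.txt" },
    {
      textDocument: { uri: "file:///project/a.txt", version: null },
      edits: [
        {
          range: { start: { line: 0, character: 0 }, end: { line: 0, character: 0 } },
          newText: "hello",
        },
      ],
    },
  ],
};
```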

WorkspaceEditClientCapabilities

New in version 3.13: ResourceOperationKind and FailureHandlingKind and the client capability workspace.workspaceEdit.resourceOperations as well as workspace.workspaceEdit.failureHandling .

The capabilities of a workspace edit have evolved over time. Clients can describe their support using the following client capability:

Work done progress is reported using the generic $/progress notification. The value payload of a work done progress notification can be of three different forms.

Work Done Progress Begin

To start progress reporting a $/progress notification with the following payload must be sent:
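The begin payload was stripped here; a sketch of its shape:

```typescript
interface WorkDoneProgressBegin {
  kind: "begin";
  /** Mandatory title of the progress operation, shown in the UI. */
  title: string;
  /** Whether a cancel button should be shown (if the client supports it). */
  cancellable?: boolean;
  /** Optional, more detailed progress message. */
  message?: string;
  /** Optional initial percentage (0-100). */
  percentage?: number;
}

const begin: WorkDoneProgressBegin = {
  kind: "begin",
  title: "Indexing",
  cancellable: false,
  percentage: 0,
};
```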

Work Done Progress Report

Reporting progress is done using the following payload:
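The report payload was stripped here; a sketch (the values are illustrative):

```typescript
interface WorkDoneProgressReport {
  kind: "report";
  cancellable?: boolean;
  /** Optional, more detailed progress message. */
  message?: string;
  /** Optional percentage (0-100). */
  percentage?: number;
}

const report: WorkDoneProgressReport = { kind: "report", message: "3/6 files", percentage: 50 };
```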

Work Done Progress End

Signaling the end of a progress reporting is done using the following payload:
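The end payload was stripped here; a sketch:

```typescript
interface WorkDoneProgressEnd {
  kind: "end";
  /** Optional final message, e.g. indicating the outcome of the operation. */
  message?: string;
}

const end: WorkDoneProgressEnd = { kind: "end", message: "Indexing done" };
```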

Initiating Work Done Progress

Work Done progress can be initiated in two different ways:

Consider a client sending a textDocument/reference request to a server and the client accepts work done progress reporting on that request. To signal this to the server the client would add a workDoneToken property to the reference request parameters. Something like this:
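The stripped example can be sketched as (the URI, position and token are illustrative):

```typescript
// A textDocument/references request parameter with a client-chosen
// workDoneToken added so the server can report progress for this request.
const referenceParams = {
  textDocument: { uri: "file:///home/user/project/app.ts" },
  position: { line: 9, character: 5 },
  context: { includeDeclaration: true },
  workDoneToken: "1d546990-40a3-4b77-b134-46622995f6ae",
};
```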

The corresponding type definition for the parameter property looks like this:
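The definition was stripped here; a sketch of its shape:

```typescript
type ProgressToken = number | string;

interface WorkDoneProgressParams {
  /** An optional token that a server can use to report work done progress. */
  workDoneToken?: ProgressToken;
}

const params: WorkDoneProgressParams = { workDoneToken: "token-1" };
```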

A server uses the workDoneToken to report progress for the specific textDocument/reference . For the above request the $/progress notification params look like this:

The token received via the workDoneToken property in a request’s param literal is only valid as long as the request has not sent a response back.

There is no specific client capability signaling whether a client will send a progress token per request. The reason is that in many clients this is not a static aspect and might even change for every request instance of the same request type. So the capability is signaled on every request instance by the presence of a workDoneToken property.

To avoid the situation where a client sets up a progress monitor user interface before sending a request but the server never actually reports any progress, a server needs to signal general work done progress reporting support in the corresponding server capability. For the above find references example, a server would signal such support by setting the referencesProvider property in the server capabilities as follows:

The corresponding type definition for the server capability looks like this:
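Both the stripped capability example and the option type can be sketched as:

```typescript
interface WorkDoneProgressOptions {
  workDoneProgress?: boolean;
}

// A server announcing work done progress support for find references:
const serverCapabilities = {
  referencesProvider: { workDoneProgress: true } as WorkDoneProgressOptions,
};
```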

Servers can also initiate progress reporting using the window/workDoneProgress/create request. This is useful if the server needs to report progress outside of a request (for example the server needs to re-index a database). The token can then be used to report progress using the same notifications as for client initiated progress. The token provided in the create request should only be used once (e.g. only one begin, many report and one end notification should be sent to it).

To keep the protocol backwards compatible servers are only allowed to use window/workDoneProgress/create request if the client signals corresponding support using the client capability window.workDoneProgress which is defined as follows:

Partial Result Progress

Partial results are also reported using the generic $/progress notification. The value payload of a partial result progress notification is in most cases the same as the final result. For example the workspace/symbol request has SymbolInformation[] | WorkspaceSymbol[] as the result type. Partial result is therefore also of type SymbolInformation[] | WorkspaceSymbol[] . Whether a client accepts partial result notifications for a request is signaled by adding a partialResultToken to the request parameter. For example, a textDocument/reference request that supports both work done and partial result progress might look like this:
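A sketch of such a params literal; both token values are illustrative:

```typescript
const referenceParams = {
  textDocument: { uri: "file:///folder/file.ts" },
  position: { line: 9, character: 5 },
  context: { includeDeclaration: true },
  // Token for work done progress reporting ($/progress with begin/report/end).
  workDoneToken: "1d546990-40a3-4b77-b134-46622995f6ae",
  // Token for streaming partial results ($/progress with result chunks).
  partialResultToken: "5f6f349e-4f81-4a3b-afff-ee04bff96804"
};
```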

The partialResultToken is then used to report partial results for the find references request.

If a server reports partial results via corresponding $/progress notifications, the whole result must be reported using n $/progress notifications. The final response has to be empty in terms of result values. This avoids confusion about how the final result should be interpreted, e.g. as another partial result or as a replacing result.

If the response errors, the provided partial results should be treated as follows:

PartialResultParams

A parameter literal used to pass a partial result token.
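Reconstructed from the 3.17 specification, with ProgressToken inlined:

```typescript
type ProgressToken = number | string;

interface PartialResultParams {
  /**
   * An optional token that a server can use to report partial results
   * (e.g. streaming) to the client.
   */
  partialResultToken?: ProgressToken;
}
```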

A TraceValue represents the level of verbosity with which the server systematically reports its execution trace using $/logTrace notifications. The initial trace value is set by the client at initialization and can be modified later using the $/setTrace notification.
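The type itself is a small string union (reconstructed from the 3.17 specification):

```typescript
type TraceValue = "off" | "messages" | "verbose";
```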

Server lifecycle

The current protocol specification defines that the lifecycle of a server is managed by the client (e.g. a tool like VS Code or Emacs). It is up to the client to decide when to start (process-wise) and when to shutdown a server.

Initialize Request

The initialize request is sent as the first request from the client to the server. If the server receives a request or notification before the initialize request it should act as follows:

Until the server has responded to the initialize request with an InitializeResult , the client must not send any additional requests or notifications to the server. In addition the server is not allowed to send any requests or notifications to the client until it has responded with an InitializeResult , with the exception that during the initialize request the server is allowed to send the notifications window/showMessage , window/logMessage and telemetry/event as well as the window/showMessageRequest request to the client. In case the client sets up a progress token in the initialize params (e.g. property workDoneToken ) the server is also allowed to use that token (and only that token) using the $/progress notification sent from the server to the client.

The initialize request may only be sent once.

Where ClientCapabilities and TextDocumentClientCapabilities are defined as follows:

TextDocumentClientCapabilities

TextDocumentClientCapabilities define capabilities the editor / tool provides on text documents.

NotebookDocumentClientCapabilities

NotebookDocumentClientCapabilities define capabilities the editor / tool provides on notebook documents.

ClientCapabilities define capabilities for dynamic registration, workspace and text document features the client supports. The experimental property can be used to pass experimental capabilities under development. For future compatibility a ClientCapabilities object literal can have more properties set than currently defined. Servers receiving a ClientCapabilities object literal with unknown properties should ignore these properties. A missing property should be interpreted as an absence of the capability. If a missing property normally defines sub properties, all missing sub properties should be interpreted as an absence of the corresponding capability.

Client capabilities got introduced with version 3.0 of the protocol. They therefore only describe capabilities that got introduced in 3.x or later. Capabilities that existed in the 2.x version of the protocol are still mandatory for clients. Clients cannot opt out of providing them. So even if a client omits the ClientCapabilities.textDocument.synchronization it is still required that the client provides text document synchronization (e.g. open, changed and close notifications).

The server can signal the following capabilities:

Initialized Notification

The initialized notification is sent from the client to the server after the client received the result of the initialize request but before the client is sending any other request or notification to the server. The server can use the initialized notification for example to dynamically register capabilities. The initialized notification may only be sent once.

Register Capability

The client/registerCapability request is sent from the server to the client to register for a new capability on the client side. Not all clients need to support dynamic capability registration. A client opts in via the dynamicRegistration property on the specific client capabilities. A client can even provide dynamic registration for capability A but not for capability B (see TextDocumentClientCapabilities as an example).

Servers must not register the same capability both statically through the initialize result and dynamically for the same document selector. If a server wants to support both static and dynamic registration it needs to check the client capability in the initialize request and only register the capability statically if the client doesn’t support dynamic registration for that capability.

Where RegistrationParams are defined as follows:

Since most of the registration options require specifying a document selector, there is a base interface that can be used. See TextDocumentRegistrationOptions .

An example JSON-RPC message to register dynamically for the textDocument/willSaveWaitUntil feature on the client side is as follows (only details shown):
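A sketch of such a message; the registration id is an arbitrary value generated by the server, shown here for illustration:

```typescript
// Server -> client request registering willSaveWaitUntil for JavaScript documents.
const registrationMessage = {
  method: "client/registerCapability",
  params: {
    registrations: [
      {
        id: "79eee87c-c409-4664-8102-e03263673f6f", // chosen by the server
        method: "textDocument/willSaveWaitUntil",
        registerOptions: {
          documentSelector: [{ language: "javascript" }]
        }
      }
    ]
  }
};
```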

This message is sent from the server to the client and after the client has successfully executed the request further textDocument/willSaveWaitUntil requests for JavaScript text documents are sent from the client to the server.

StaticRegistrationOptions can be used to register a feature in the initialize result with a given server control ID to be able to un-register the feature later on.

TextDocumentRegistrationOptions can be used to dynamically register for requests for a set of text documents.

Unregister Capability

The client/unregisterCapability request is sent from the server to the client to unregister a previously registered capability.

Where UnregistrationParams are defined as follows:

An example JSON-RPC message to unregister the above registered textDocument/willSaveWaitUntil feature looks like this:
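A sketch of such a message; the id must match the one used when the capability was registered (an illustrative value is shown). Note that the specification spells the property unregisterations, a misspelling kept for backwards compatibility:

```typescript
// Server -> client request removing a previously registered capability.
const unregistrationMessage = {
  method: "client/unregisterCapability",
  params: {
    // The property name "unregisterations" is a known typo that the
    // specification preserves for backwards compatibility.
    unregisterations: [
      {
        id: "79eee87c-c409-4664-8102-e03263673f6f", // id used at registration time
        method: "textDocument/willSaveWaitUntil"
      }
    ]
  }
};
```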

SetTrace Notification

A notification that should be used by the client to modify the trace setting of the server.

LogTrace Notification

A notification to log the trace of the server’s execution. The amount and content of these notifications depends on the current trace configuration. If trace is 'off' , the server should not send any logTrace notification. If trace is 'messages' , the server should not add the 'verbose' field in the LogTraceParams .

$/logTrace should be used for systematic trace reporting. For single debugging messages, the server should send window/logMessage notifications.

Shutdown Request

The shutdown request is sent from the client to the server. It asks the server to shut down, but to not exit (otherwise the response might not be delivered correctly to the client). There is a separate exit notification that asks the server to exit. Clients must not send any notifications other than exit or requests to a server to which they have sent a shutdown request. Clients should also wait with sending the exit notification until they have received a response from the shutdown request.

If a server receives requests after a shutdown request those requests should error with InvalidRequest .

Exit Notification

A notification to ask the server to exit its process. The server should exit with success code 0 if the shutdown request has been received before; otherwise with error code 1.

Text Document Synchronization

Client support for textDocument/didOpen , textDocument/didChange and textDocument/didClose notifications is mandatory in the protocol and clients cannot opt out of supporting them. This includes both full and incremental synchronization in the textDocument/didChange notification. In addition a server must either implement all three of them or none. Their capabilities are therefore controlled via a combined client and server capability. Opting out of text document synchronization only makes sense if the documents shown by the client are read only. Otherwise the server might receive requests for documents whose content is managed in the client (e.g. they might have changed).

Controls whether text document synchronization supports dynamic registration.

Server Capability :

DidOpenTextDocument Notification

The document open notification is sent from the client to the server to signal newly opened text documents. The document’s content is now managed by the client and the server must not try to read the document’s content using the document’s Uri. Open in this sense means it is managed by the client. It doesn’t necessarily mean that its content is presented in an editor. An open notification must not be sent more than once without a corresponding close notification sent before. This means open and close notifications must be balanced and the max open count for a particular textDocument is one. Note that a server’s ability to fulfill requests is independent of whether a text document is open or closed.

The DidOpenTextDocumentParams contain the language id the document is associated with. If the language id of a document changes, the client needs to send a textDocument/didClose to the server followed by a textDocument/didOpen with the new language id if the server handles the new language id as well.

Client Capability : See general synchronization client capabilities .

Server Capability : See general synchronization server capabilities .

Registration Options : TextDocumentRegistrationOptions

DidChangeTextDocument Notification

The document change notification is sent from the client to the server to signal changes to a text document. Before a client can change a text document it must claim ownership of its content using the textDocument/didOpen notification. In 2.0 the shape of the params has changed to include proper version numbers.

Registration Options : TextDocumentChangeRegistrationOptions defined as follows:

WillSaveTextDocument Notification

The document will save notification is sent from the client to the server before the document is actually saved. If a server has registered for open / close events clients should ensure that the document is open before a willSave notification is sent since clients can’t change the content of a file without ownership transferal.

The capability indicates that the client supports textDocument/willSave notifications.

The capability indicates that the server is interested in textDocument/willSave notifications.

WillSaveWaitUntilTextDocument Request

The document will save request is sent from the client to the server before the document is actually saved. The request can return an array of TextEdits which will be applied to the text document before it is saved. Please note that clients might drop results if computing the text edits took too long or if a server constantly fails on this request. This is done to keep the save fast and reliable. If a server has registered for open / close events clients should ensure that the document is open before a willSaveWaitUntil notification is sent since clients can’t change the content of a file without ownership transferal.

The capability indicates that the client supports textDocument/willSaveWaitUntil requests.

The capability indicates that the server is interested in textDocument/willSaveWaitUntil requests.

DidSaveTextDocument Notification

The document save notification is sent from the client to the server when the document was saved in the client.

The capability indicates that the client supports textDocument/didSave notifications.

The capability indicates that the server is interested in textDocument/didSave notifications.

Registration Options : TextDocumentSaveRegistrationOptions defined as follows:

DidCloseTextDocument Notification

The document close notification is sent from the client to the server when the document got closed in the client. The document’s master now exists where the document’s Uri points to (e.g. if the document’s Uri is a file Uri the master now exists on disk). As with the open notification the close notification is about managing the document’s content. Receiving a close notification doesn’t mean that the document was open in an editor before. A close notification requires a previous open notification to be sent. Note that a server’s ability to fulfill requests is independent of whether a text document is open or closed.

Renaming a document

Document renames should be signaled to a server by sending a document close notification with the document’s old name followed by an open notification using the document’s new name. The major reason is that besides the name other attributes can change as well, like the language that is associated with the document. In addition the new document might no longer be of interest for the server.

Servers can participate in a document rename by subscribing for the workspace/didRenameFiles notification or the workspace/willRenameFiles request.

The final structure of the TextDocumentSyncClientCapabilities and the TextDocumentSyncOptions server options look like this

Notebook Document Synchronization

Notebooks are becoming more and more popular. Adding support for them to the language server protocol allows notebook editors to reuse the language smarts provided by the server inside a notebook or a notebook cell, respectively. To reuse protocol parts and therefore server implementations, notebooks are modeled in the following way in LSP:

The two concepts are defined as follows:

Next we describe how notebooks, notebook cells and the content of a notebook cell should be synchronized to a language server.

Syncing the text content of a cell is relatively easy since clients should model them as text documents. However since the URI of a notebook cell’s text document should be opaque, servers cannot know its scheme nor its path. What is known, however, is the notebook document itself. We therefore introduce a special filter for notebook cell documents:

Given these structures a Python cell document in a Jupyter notebook stored on disk in a folder having books1 in its path can be identified as follows:
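Such a filter might be written as follows (the glob pattern is illustrative):

```typescript
// NotebookCellTextDocumentFilter: matches Python cells of Jupyter notebooks
// stored on disk under a folder containing "books1" in its path.
const cellFilter = {
  notebook: {
    scheme: "file",
    pattern: "**/books1/**",
    notebookType: "jupyter-notebook"
  },
  language: "python"
};
```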

A NotebookCellTextDocumentFilter can be used to register providers for certain requests like code complete or hover. If such a provider is registered the client will send the corresponding textDocument/* requests to the server using the cell text document’s URI as the document URI.

There are cases where only knowing about a cell’s text content is not enough for a server to reason about the cell’s content and to provide good language smarts. Sometimes it is necessary to know all cells of a notebook document including the notebook document itself. Consider a notebook that has two JavaScript cells with the following content

Requesting code assist in cell two at the marked cursor position should propose the function add which is only possible if the server knows about cell one and cell two and knows that they belong to the same notebook document.

The protocol will therefore support two modes when it comes to synchronizing cell text content:

To request the cell content only a normal document selector can be used. For example the selector [{ language: 'python' }] will synchronize Python notebook document cells to the server. However since this might synchronize unwanted documents as well a document filter can also be a NotebookCellTextDocumentFilter . So { notebook: { scheme: 'file', notebookType: 'jupyter-notebook' }, language: 'python' } synchronizes all Python cells in a Jupyter notebook stored on disk.

To synchronize the whole notebook document a server provides a notebookDocumentSync in its server capabilities. For example:
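A sketch of such a capability:

```typescript
// Server capabilities fragment: request full notebook synchronization for
// Jupyter notebooks stored on disk, including their Python cells.
const serverCapabilities = {
  notebookDocumentSync: {
    notebookSelector: [
      {
        notebook: { scheme: "file", notebookType: "jupyter-notebook" },
        cells: [{ language: "python" }]
      }
    ]
  }
};
```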

Synchronizes the notebook including all Python cells to the server if the notebook is stored on disk.

The following client capabilities are defined for notebook documents:

The following server capabilities are defined for notebook documents:

Registration Options : NotebookDocumentRegistrationOptions defined as follows:

DidOpenNotebookDocument Notification

The open notification is sent from the client to the server when a notebook document is opened. It is only sent by a client if the server requested the synchronization mode notebook in its notebookDocumentSync capability.

DidChangeNotebookDocument Notification

The change notification is sent from the client to the server when a notebook document changes. It is only sent by a client if the server requested the synchronization mode notebook in its notebookDocumentSync capability.

DidSaveNotebookDocument Notification

The save notification is sent from the client to the server when a notebook document is saved. It is only sent by a client if the server requested the synchronization mode notebook in its notebookDocumentSync capability.

DidCloseNotebookDocument Notification

The close notification is sent from the client to the server when a notebook document is closed. It is only sent by a client if the server requested the synchronization mode notebook in its notebookDocumentSync capability.

Language Features

Language Features provide the actual smarts in the language server protocol. They are usually executed on a [text document, position] tuple. The main language feature categories are:

Goto Declaration Request

Since version 3.14.0

The go to declaration request is sent from the client to the server to resolve the declaration location of a symbol at a given text document position.

The result type LocationLink [] got introduced with version 3.14.0 and depends on the corresponding client capability textDocument.declaration.linkSupport .

Registration Options : DeclarationRegistrationOptions defined as follows:

Goto Definition Request

The go to definition request is sent from the client to the server to resolve the definition location of a symbol at a given text document position.

The result type LocationLink [] got introduced with version 3.14.0 and depends on the corresponding client capability textDocument.definition.linkSupport .

Registration Options : DefinitionRegistrationOptions defined as follows:

Goto Type Definition Request

Since version 3.6.0

The go to type definition request is sent from the client to the server to resolve the type definition location of a symbol at a given text document position.

The result type LocationLink [] got introduced with version 3.14.0 and depends on the corresponding client capability textDocument.typeDefinition.linkSupport .

Registration Options : TypeDefinitionRegistrationOptions defined as follows:

Goto Implementation Request

The go to implementation request is sent from the client to the server to resolve the implementation location of a symbol at a given text document position.

The result type LocationLink [] got introduced with version 3.14.0 and depends on the corresponding client capability textDocument.implementation.linkSupport .

Registration Options : ImplementationRegistrationOptions defined as follows:

Find References Request

The references request is sent from the client to the server to resolve project-wide references for the symbol denoted by the given text document position.

Registration Options : ReferenceRegistrationOptions defined as follows:

Prepare Call Hierarchy Request

Since version 3.16.0

The call hierarchy request is sent from the client to the server to return a call hierarchy for the language element of given text document positions. The call hierarchy requests are executed in two steps:

Registration Options : CallHierarchyRegistrationOptions defined as follows:

Call Hierarchy Incoming Calls

The request is sent from the client to the server to resolve incoming calls for a given call hierarchy item. The request doesn’t define its own client and server capabilities. It is only issued if a server registers for the textDocument/prepareCallHierarchy request .

Call Hierarchy Outgoing Calls

The request is sent from the client to the server to resolve outgoing calls for a given call hierarchy item. The request doesn’t define its own client and server capabilities. It is only issued if a server registers for the textDocument/prepareCallHierarchy request .

Prepare Type Hierarchy Request

Since version 3.17.0

The type hierarchy request is sent from the client to the server to return a type hierarchy for the language element of given text document positions. Will return null if the server couldn’t infer a valid type from the position. The type hierarchy requests are executed in two steps:

Registration Options : TypeHierarchyRegistrationOptions defined as follows:

Type Hierarchy Supertypes

The request is sent from the client to the server to resolve the supertypes for a given type hierarchy item. Will return null if the server couldn’t infer a valid type from item in the params. The request doesn’t define its own client and server capabilities. It is only issued if a server registers for the textDocument/prepareTypeHierarchy request .

Type Hierarchy Subtypes

The request is sent from the client to the server to resolve the subtypes for a given type hierarchy item. Will return null if the server couldn’t infer a valid type from item in the params. The request doesn’t define its own client and server capabilities. It is only issued if a server registers for the textDocument/prepareTypeHierarchy request .

Document Highlights Request

The document highlight request is sent from the client to the server to resolve document highlights for a given text document position. For programming languages this usually highlights all references to the symbol scoped to this file. However we kept ‘textDocument/documentHighlight’ and ‘textDocument/references’ as separate requests since the first one is allowed to be more fuzzy. Symbol matches usually have a DocumentHighlightKind of Read or Write whereas fuzzy or textual matches use Text as the kind.

Registration Options : DocumentHighlightRegistrationOptions defined as follows:

Document Link Request

The document links request is sent from the client to the server to request the location of links in a document.

Registration Options : DocumentLinkRegistrationOptions defined as follows:

Document Link Resolve Request

The document link resolve request is sent from the client to the server to resolve the target of a given document link.

Hover Request

The hover request is sent from the client to the server to request hover information at a given text document position.

Registration Options : HoverRegistrationOptions defined as follows:

Where MarkedString is defined as follows:

Code Lens Request

The code lens request is sent from the client to the server to compute code lenses for a given text document.

Registration Options : CodeLensRegistrationOptions defined as follows:

Code Lens Resolve Request

The code lens resolve request is sent from the client to the server to resolve the command for a given code lens item.

Code Lens Refresh Request

The workspace/codeLens/refresh request is sent from the server to the client. Servers can use it to ask clients to refresh the code lenses currently shown in editors. As a result the client should ask the server to recompute the code lenses for these editors. This is useful if a server detects a configuration change which requires a re-calculation of all code lenses. Note that the client still has the freedom to delay the re-calculation of the code lenses if for example an editor is currently not visible.

Folding Range Request

Since version 3.10.0

The folding range request is sent from the client to the server to return all folding ranges found in a given text document.

Registration Options : FoldingRangeRegistrationOptions defined as follows:

Selection Range Request

The selection range request is sent from the client to the server to return suggested selection ranges at an array of given positions. A selection range is a range around the cursor position which the user might be interested in selecting.

A selection range in the return array is for the position in the provided parameters at the same index. Therefore positions[i] must be contained in result[i].range. To allow for results where some positions have selection ranges and others do not, result[i].range is allowed to be the empty range at positions[i].

Typically, but not necessarily, selection ranges correspond to the nodes of the syntax tree.

Registration Options : SelectionRangeRegistrationOptions defined as follows:

Document Symbols Request

The document symbol request is sent from the client to the server. The returned result is either

Servers should whenever possible return DocumentSymbol since it is the richer data structure.

Registration Options : DocumentSymbolRegistrationOptions defined as follows:

Semantic Tokens

The request is sent from the client to the server to resolve semantic tokens for a given file. Semantic tokens are used to add additional color information to a file that depends on language specific symbol information. A semantic token request usually produces a large result. The protocol therefore supports encoding tokens with numbers. In addition optional support for deltas is available.

General Concepts

Tokens are represented using one token type combined with n token modifiers. A token type is something like class or function and token modifiers are like static or async . The protocol defines a set of token types and modifiers but clients are allowed to extend these and announce the values they support in the corresponding client capability. The predefined values are:

The protocol defines an additional token format capability to allow future extensions of the format. The only format that is currently specified is relative expressing that the tokens are described using relative positions (see Integer Encoding for Tokens below).

Integer Encoding for Tokens

On the capability level, types and modifiers are defined using strings. However the real encoding happens using numbers. The server therefore needs to let the client know which numbers it is using for which types and modifiers. It does so using a legend, which is defined as follows:

Token types are looked up by index, so a tokenType value of 1 means tokenTypes[1] . Since a token type can have n modifiers, multiple token modifiers can be set by using bit flags, so a tokenModifier value of 3 is first viewed as binary 0b00000011 , which means [tokenModifiers[0], tokenModifiers[1]] because bits 0 and 1 are set.
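The bit-flag lookup can be sketched as a small helper (the legend below is illustrative):

```typescript
// Decode a tokenModifiers integer into modifier names using a legend.
// Bit i of the integer selects legend[i].
function decodeModifiers(bits: number, legend: string[]): string[] {
  const result: string[] = [];
  for (let i = 0; i < legend.length; i++) {
    if (bits & (1 << i)) {
      result.push(legend[i]);
    }
  }
  return result;
}

// 3 is binary 0b00000011, so bits 0 and 1 are set:
// decodeModifiers(3, ["declaration", "static", "async"]) → ["declaration", "static"]
```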

The position of a token can be expressed in a file in different ways: either using absolute positions or relative positions. The protocol for the token format relative uses relative positions, because most tokens remain stable relative to each other when edits are made in a file. This simplifies the computation of a delta if a server supports it. So each token is represented using 5 integers. A specific token i in the file consists of the following array indices:

The deltaStart and the length values must be encoded using the encoding the client and server agree on during the initialize request (see also TextDocuments ). Whether a token can span multiple lines is defined by the client capability multilineTokenSupport . If multiline tokens are not supported and a token’s length takes it past the end of the line, it should be treated as if the token ends at the end of the line and will not wrap onto the next line.

The client capability overlappingTokenSupport defines whether tokens can overlap each other.

Let’s look at a concrete example that uses single-line tokens without overlaps to encode a file with 3 tokens in a number array. We start with absolute positions to demonstrate how they can easily be transformed into relative positions:

First of all, a legend must be devised. This legend must be provided up-front on registration and capture all possible token types and modifiers. For the example we use this legend:

The first transformation step is to encode tokenType and tokenModifiers as integers using the legend. As said, token types are looked up by index, so a tokenType value of 1 means tokenTypes[1] . Multiple token modifiers can be set by using bit flags, so a tokenModifier value of 3 is first viewed as binary 0b00000011 , which means [tokenModifiers[0], tokenModifiers[1]] because bits 0 and 1 are set. Using this legend, the tokens now are:

The next step is to represent each token relative to the previous token in the file. In this case, the second token is on the same line as the first token, so the startChar of the second token is made relative to the startChar of the first token, so it will be 10 - 5 . The third token is on a different line than the second token, so the startChar of the third token will not be altered:

Finally, the last step is to inline each of the 5 fields for a token in a single array, which is a memory friendly representation:
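The relative-position step and the inlining step can be sketched as a small encoder; it assumes the tokens have already been mapped to legend integers:

```typescript
interface AbsoluteToken {
  line: number;
  startChar: number;
  length: number;
  tokenType: number;      // index into the legend's tokenTypes
  tokenModifiers: number; // bit set over the legend's tokenModifiers
}

// Produce the flat number array: 5 integers per token, with line and
// startChar expressed relative to the previous token.
function encodeTokens(tokens: AbsoluteToken[]): number[] {
  const data: number[] = [];
  let prevLine = 0;
  let prevChar = 0;
  for (const t of tokens) {
    const deltaLine = t.line - prevLine;
    // startChar is only made relative when the token stays on the same line.
    const deltaStart = deltaLine === 0 ? t.startChar - prevChar : t.startChar;
    data.push(deltaLine, deltaStart, t.length, t.tokenType, t.tokenModifiers);
    prevLine = t.line;
    prevChar = t.startChar;
  }
  return data;
}
```

With three tokens at (line 2, char 5), (line 2, char 10) and (line 5, char 2) this yields the array [ 2,5,3,0,3, 0,5,4,1,0, 3,2,7,2,0 ].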

Now assume that the user types a new empty line at the beginning of the file which results in the following tokens in the file:

Running the same transformations as above will result in the following number array:

The delta is now expressed on these number arrays without any form of interpretation of what these numbers mean. This is comparable to the text document edits sent from the server to the client to modify the content of a file. Those are character based and don’t make any assumption about the meaning of the characters. So [ 2,5,3,0,3, 0,5,4,1,0, 3,2,7,2,0 ] can be transformed into [ 3,5,3,0,3, 0,5,4,1,0, 3,2,7,2,0 ] using the following edit description: { start: 0, deleteCount: 1, data: [3] } which tells the client to simply replace the first number (e.g. 2 ) in the array with 3 .

Semantic token edits behave conceptually like text edits on documents: if an edit description consists of n edits all n edits are based on the same state Sm of the number array. They will move the number array from state Sm to Sm+1. A client applying the edits must not assume that they are sorted. An easy algorithm to apply them to the number array is to sort the edits and apply them from the back to the front of the number array.
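The back-to-front application described above can be sketched as follows; SemanticTokensEdit mirrors the shape used in the edit description ({ start, deleteCount, data }):

```typescript
interface SemanticTokensEdit {
  start: number;       // index into the number array
  deleteCount: number; // how many numbers to remove at start
  data?: number[];     // numbers to insert at start
}

// Sort by start descending, then splice from the back so earlier edits
// don't shift the indices of later ones.
function applyEdits(data: number[], edits: SemanticTokensEdit[]): number[] {
  const result = data.slice();
  const sorted = edits.slice().sort((a, b) => b.start - a.start);
  for (const e of sorted) {
    result.splice(e.start, e.deleteCount, ...(e.data ?? []));
  }
  return result;
}

// Applying { start: 0, deleteCount: 1, data: [3] } to
// [2,5,3,0,3, 0,5,4,1,0, 3,2,7,2,0] yields [3,5,3,0,3, 0,5,4,1,0, 3,2,7,2,0].
```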

The following client capabilities are defined for semantic token requests sent from the client to the server:

The following server capabilities are defined for semantic tokens:

Registration Options : SemanticTokensRegistrationOptions defined as follows:

Since the registration option handles range, full and delta requests the method used to register for semantic tokens requests is textDocument/semanticTokens and not one of the specific methods described below.

Requesting semantic tokens for a whole file

Requesting semantic token delta for a whole file

Requesting semantic tokens for a range

There are two use cases where it can be beneficial to only compute semantic tokens for a visible range:

A server is allowed to compute the semantic tokens for a broader range than requested by the client. However if it does, the semantic tokens for the broader range must be complete and correct.

Requesting a refresh of all semantic tokens

The workspace/semanticTokens/refresh request is sent from the server to the client. Servers can use it to ask clients to refresh the editors for which this server provides semantic tokens. As a result the client should ask the server to recompute the semantic tokens for these editors. This is useful if a server detects a project wide configuration change which requires a re-calculation of all semantic tokens. Note that the client still has the freedom to delay the re-calculation of the semantic tokens if for example an editor is currently not visible.

Inlay Hint Request ( textDocument/inlayHint )

The inlay hints request is sent from the client to the server to compute inlay hints for a given [text document, range] tuple that may be rendered in the editor in place with other text.

Registration Options : InlayHintRegistrationOptions defined as follows:

Inlay Hint Resolve Request ( inlayHint/resolve )

The request is sent from the client to the server to resolve additional information for a given inlay hint. This is usually used to compute the tooltip , location or command properties of an inlay hint’s label part to avoid its unnecessary computation during the textDocument/inlayHint request.

Consider that a client announces the label.location property as a property that can be resolved lazily using the client capability

then an inlay hint with a label part without a location needs to be resolved using the inlayHint/resolve request before it can be used.

Inlay Hint Refresh Request ( workspace/inlayHint/refresh )

The workspace/inlayHint/refresh request is sent from the server to the client. Servers can use it to ask clients to refresh the inlay hints currently shown in editors. As a result the client should ask the server to recompute the inlay hints for these editors. This is useful if a server detects a configuration change which requires a re-calculation of all inlay hints. Note that the client still has the freedom to delay the re-calculation of the inlay hints if for example an editor is currently not visible.

Inline Value Request ( textDocument/inlineValue )

The inline value request is sent from the client to the server to compute inline values for a given text document that may be rendered in the editor at the end of lines.

Registration Options : InlineValueRegistrationOptions defined as follows:

Inline Value Refresh Request ( workspace/inlineValue/refresh )

The workspace/inlineValue/refresh request is sent from the server to the client. Servers can use it to ask clients to refresh the inline values currently shown in editors. As a result the client should ask the server to recompute the inline values for these editors. This is useful if a server detects a configuration change which requires a re-calculation of all inline values. Note that the client still has the freedom to delay the re-calculation of the inline values if for example an editor is currently not visible.

Monikers ( textDocument/moniker )

Language Server Index Format (LSIF) introduced the concept of symbol monikers to help associate symbols across different indexes. This request adds the capability for LSP server implementations to provide the same symbol moniker information given a text document position. Clients can utilize this method to get the moniker at the current location in a file the user is editing and do further code navigation queries in other services that rely on LSIF indexes and link symbols together.

The textDocument/moniker request is sent from the client to the server to get the symbol monikers for a given text document position. An array of Moniker types is returned as response to indicate possible monikers at the given location. If no monikers can be calculated, an empty array or null should be returned.

Client Capabilities :

Registration Options : MonikerRegistrationOptions defined as follows:

Moniker is defined as follows:

Server implementations of this method should ensure that the moniker calculation matches to those used in the corresponding LSIF implementation to ensure symbols can be associated correctly across IDE sessions and LSIF indexes.

Completion Request ( textDocument/completion )

The Completion request is sent from the client to the server to compute completion items at a given cursor position. Completion items are presented in the IntelliSense user interface. If computing full completion items is expensive, servers can additionally provide a handler for the completion item resolve request (‘completionItem/resolve’). This request is sent when a completion item is selected in the user interface. A typical use case is for example: the textDocument/completion request doesn’t fill in the documentation property for returned completion items since it is expensive to compute. When the item is selected in the user interface then a ‘completionItem/resolve’ request is sent with the selected completion item as a parameter. The returned completion item should have the documentation property filled in. By default the request can only delay the computation of the detail and documentation properties. Since 3.16.0 the client can signal that it can resolve more properties lazily. This is done using the completionItem#resolveSupport client capability which lists all properties that can be filled in during a ‘completionItem/resolve’ request. All other properties (usually sortText , filterText , insertText and textEdit ) must be provided in the textDocument/completion response and must not be changed during resolve.
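For illustration, a client capability announcing lazily resolvable properties could look like the following sketch (the listed property names are examples, not an exhaustive or mandated set):

```typescript
// Hypothetical client capability: the client states that
// `documentation` and `additionalTextEdits` may be filled in later
// by a `completionItem/resolve` request.
const completionClientCapabilities = {
  completionItem: {
    resolveSupport: {
      properties: ["documentation", "additionalTextEdits"],
    },
  },
};
```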

The language server protocol uses the following model around completions:

A completion item provides additional means to influence filtering and sorting. They are expressed by either creating a CompletionItem with an insertText or with a textEdit . The two modes differ as follows:

Completion item provides an insertText / label without a text edit : in the model the client should filter against what the user has already typed using the word boundary rules of the language (e.g. resolving the word under the cursor position). The reason for this mode is that it makes it extremely easy for a server to implement a basic completion list and get it filtered on the client.

Completion Item with text edits : in this mode the server tells the client that it actually knows what it is doing. If you create a completion item with a text edit at the current cursor position, no word guessing takes place and no filtering should happen. This mode can be combined with a sort text and filter text to customize two things. If the text edit is a replace edit then the range denotes the word used for filtering. If the replace changes the text it most likely makes sense to specify a filter text to be used.
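A sketch of this second mode (the label, range and inserted text are invented for illustration): the item's text edit replaces the word under the cursor, and a filter text is given explicitly since the replacement differs from the label:

```typescript
// Hypothetical completion item: replace the range of the typed word
// with `console.log`, but filter against what the user typed ("log").
const item = {
  label: "log",
  filterText: "log",
  textEdit: {
    range: {
      start: { line: 4, character: 2 },
      end: { line: 4, character: 5 },
    },
    newText: "console.log",
  },
};
```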

Registration Options : CompletionRegistrationOptions options defined as follows:

Completion items support snippets (see InsertTextFormat.Snippet ). The snippet format is as follows:

Snippet Syntax

The body of a snippet can use special constructs to control cursors and the text being inserted. The following are supported features and their syntaxes:

With tab stops, you can make the editor cursor move inside a snippet. Use $1 , $2 to specify cursor locations. The number is the order in which tab stops will be visited, whereas $0 denotes the final cursor position. Multiple tab stops are linked and updated in sync.

Placeholders

Placeholders are tab stops with values, like ${1:foo} . The placeholder text will be inserted and selected such that it can be easily changed. Placeholders can be nested, like ${1:another ${2:placeholder}} .

Placeholders can have choices as values. The syntax is a comma separated enumeration of values, enclosed with the pipe-character, for example ${1|one,two,three|} . When the snippet is inserted and the placeholder selected, choices will prompt the user to pick one of the values.

With $name or ${name:default} you can insert the value of a variable. When a variable isn’t set, its default or the empty string is inserted. When a variable is unknown (that is, its name isn’t defined) the name of the variable is inserted and it is transformed into a placeholder.
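A small illustrative snippet body (not taken from the spec) combining a choice placeholder, a placeholder with default text, tab stops, the final cursor position $0 and a variable with a default value:

```
${1|public,private|} function ${2:name}($3) {
	// defined in ${TM_FILENAME:unknown file}
	$0
}
```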

The following variables can be used:

Variable Transforms

Transformations allow you to modify the value of a variable before it is inserted. The definition of a transformation consists of three parts:

The following example inserts the name of the current file without its ending, so from foo.txt it makes foo .
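That example can be written as the following transform: a regular expression captures everything before the last dot, and the replacement keeps only that capture group:

```
${TM_FILENAME/(.*)\..+$/$1/}
```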

Below is the EBNF ( extended Backus-Naur form ) for snippets. With \ (backslash), you can escape $ , } and \ . Within choice elements, the backslash also escapes comma and pipe characters.

Completion Item Resolve Request ( completionItem/resolve )

The request is sent from the client to the server to resolve additional information for a given completion item.

PublishDiagnostics Notification ( textDocument/publishDiagnostics )

Diagnostics notifications are sent from the server to the client to signal the results of validation runs.

Diagnostics are “owned” by the server so it is the server’s responsibility to clear them if necessary. The following rule is used for VS Code servers that generate diagnostics:

When a file changes it is the server’s responsibility to re-compute diagnostics and push them to the client. If the computed set is empty it has to push the empty array to clear former diagnostics. Newly pushed diagnostics always replace previously pushed diagnostics. There is no merging that happens on the client side.
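A sketch of two consecutive pushes (the URI, range and message are invented): first a diagnostic is published, then, once the problem is fixed, the empty array clears it:

```typescript
// Hypothetical params of a textDocument/publishDiagnostics notification.
const withError = {
  uri: "file:///project/app.ts",
  diagnostics: [
    {
      range: {
        start: { line: 7, character: 0 },
        end: { line: 7, character: 10 },
      },
      severity: 1, // Error
      message: "example diagnostic",
    },
  ],
};

// After the problem is fixed the server must push the empty array;
// the client does no merging, the new set simply replaces the old one.
const cleared = { uri: "file:///project/app.ts", diagnostics: [] };
```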

See also the Diagnostic section.

Diagnostics are currently published by the server to the client using a notification. This model has the advantage that for workspace wide diagnostics the server has the freedom to compute them at a server preferred point in time. On the other hand the approach has the disadvantage that the server can’t prioritize the computation for the files in which the user types or which are visible in the editor. Inferring the client’s UI state from the textDocument/didOpen and textDocument/didChange notifications might lead to false positives since these notifications are ownership transfer notifications.

The specification therefore introduces the concept of diagnostic pull requests to give a client more control over the documents for which diagnostics should be computed and at which point in time.

Registration Options : DiagnosticRegistrationOptions options defined as follows:

Document Diagnostics ( textDocument/diagnostic )

The text document diagnostic request is sent from the client to the server to ask the server to compute the diagnostics for a given document. As with other pull requests the server is asked to compute the diagnostics for the currently synced version of the document.

Workspace Diagnostics ( workspace/diagnostic )

The workspace diagnostic request is sent from the client to the server to ask the server to compute workspace wide diagnostics which previously were pushed from the server to the client. In contrast to the document diagnostic request the workspace request can be long running and is not bound to a specific workspace or document state. If the client supports streaming for the workspace diagnostic pull it is legal to provide a document diagnostic report multiple times for the same document URI. The last one reported wins over previous reports.

If a client receives a diagnostic report for a document in a workspace diagnostic request for which the client also issues individual document diagnostic pull requests the client needs to decide which diagnostics win and should be presented. In general:

Diagnostics Refresh ( workspace/diagnostic/refresh )

The workspace/diagnostic/refresh request is sent from the server to the client. Servers can use it to ask clients to refresh all needed document and workspace diagnostics. This is useful if a server detects a project wide configuration change which requires a re-calculation of all diagnostics.

Generally the language server specification doesn’t enforce any specific client implementation since those usually depend on how the client UI behaves. However since diagnostics can be provided on a document and workspace level here are some tips:

Signature Help Request ( textDocument/signatureHelp )

The signature help request is sent from the client to the server to request signature information at a given cursor position.

Registration Options : SignatureHelpRegistrationOptions defined as follows:

Code Action Request ( textDocument/codeAction )

The code action request is sent from the client to the server to compute commands for a given text document and range. These commands are typically code fixes to either fix problems or to beautify/refactor code. The result of a textDocument/codeAction request is an array of Command literals which are typically presented in the user interface. To ensure that a server is useful in many clients the commands specified in a code actions should be handled by the server and not by the client (see workspace/executeCommand and ServerCapabilities.executeCommandProvider ). If the client supports providing edits with a code action then that mode should be used.

Since version 3.16.0: a client can allow a server to delay the computation of code action properties during a ‘textDocument/codeAction’ request:

This is useful for cases where it is expensive to compute the value of a property (for example the edit property). Clients signal this through the codeAction.resolveSupport capability which lists all properties a client can resolve lazily. The server capability codeActionProvider.resolveProvider signals that a server will offer a codeAction/resolve route. To help servers uniquely identify a code action in the resolve request, a code action literal can optionally carry a data property. This is also guarded by an additional client capability codeAction.dataSupport . In general, a client should offer data support if it offers resolve support. It should also be noted that servers shouldn’t alter existing attributes of a code action in a codeAction/resolve request.

Since version 3.8.0: support for CodeAction literals to enable the following scenarios:

Clients need to announce their support for code action literals (e.g. literals of type CodeAction ) and code action kinds via the corresponding client capability codeAction.codeActionLiteralSupport .

Registration Options : CodeActionRegistrationOptions defined as follows:

Code Action Resolve Request ( codeAction/resolve )

The request is sent from the client to the server to resolve additional information for a given code action. This is usually used to compute the edit property of a code action to avoid its unnecessary computation during the textDocument/codeAction request.

Consider that a client announces the edit property as a property that can be resolved lazily using the client capability

then a code action

needs to be resolved using the codeAction/resolve request before it can be applied.

Document Color Request ( textDocument/documentColor )

The document color request is sent from the client to the server to list all color references found in a given text document. Along with the range, a color value in RGB is returned.

Clients can use the result to decorate color references in an editor. For example:

Registration Options : DocumentColorRegistrationOptions defined as follows:

Color Presentation Request ( textDocument/colorPresentation )

The color presentation request is sent from the client to the server to obtain a list of presentations for a color value at a given location. Clients can use the result to

This request has no special capabilities and registration options since it is sent as a resolve request for the textDocument/documentColor request.

Document Formatting Request ( textDocument/formatting )

The document formatting request is sent from the client to the server to format a whole document.

Registration Options : DocumentFormattingRegistrationOptions defined as follows:

Document Range Formatting Request ( textDocument/rangeFormatting )

The document range formatting request is sent from the client to the server to format a given range in a document.

Document on Type Formatting Request ( textDocument/onTypeFormatting )

The document on type formatting request is sent from the client to the server to format parts of the document during typing.

Registration Options : DocumentOnTypeFormattingRegistrationOptions defined as follows:

Rename Request ( textDocument/rename )

The rename request is sent from the client to the server to ask the server to compute a workspace change so that the client can perform a workspace-wide rename of a symbol.

RenameOptions may only be specified if the client states that it supports prepareSupport in its initial initialize request.

Registration Options : RenameRegistrationOptions defined as follows:

Prepare Rename Request ( textDocument/prepareRename )

Since version 3.12.0

The prepare rename request is sent from the client to the server to setup and test the validity of a rename operation at a given location.

Linked Editing Range ( textDocument/linkedEditingRange )

The linked editing request is sent from the client to the server to return for a given position in a document the range of the symbol at the position and all ranges that have the same content. Optionally a word pattern can be returned to describe valid contents. A rename to one of the ranges can be applied to all other ranges if the new content is valid. If no result-specific word pattern is provided, the word pattern from the client’s language configuration is used.

Registration Options : LinkedEditingRangeRegistrationOptions defined as follows:

Workspace Features

Workspace Symbols Request ( workspace/symbol )

The workspace symbol request is sent from the client to the server to list project-wide symbols matching the query string. Since 3.17.0 servers can also provide a handler for workspaceSymbol/resolve requests. This allows servers to return workspace symbols without a range for a workspace/symbol request. Clients then need to resolve the range when necessary using the workspaceSymbol/resolve request. Servers can only use this new model if clients advertise support for it via the workspace.symbol.resolveSupport capability.

Registration Options : WorkspaceSymbolRegistrationOptions defined as follows:

Workspace Symbol Resolve Request ( workspaceSymbol/resolve )

The request is sent from the client to the server to resolve additional information for a given workspace symbol.

Configuration Request ( workspace/configuration )

The workspace/configuration request is sent from the server to the client to fetch configuration settings from the client. The request can fetch several configuration settings in one roundtrip. The order of the returned configuration settings corresponds to the order of the passed ConfigurationItems (e.g. the first item in the response is the result for the first configuration item in the params).

A ConfigurationItem consists of the configuration section to ask for and an additional scope URI. The configuration section asked for is defined by the server and doesn’t necessarily need to correspond to the configuration store used by the client. So a server might ask for a configuration cpp.formatterOptions but the client stores the configuration in an XML store with a different layout. It is up to the client to do the necessary conversion. If a scope URI is provided the client should return the setting scoped to the provided resource. If the client for example uses EditorConfig to manage its settings the configuration should be returned for the passed resource URI. If the client can’t provide a configuration setting for a given scope then null needs to be present in the returned array.
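A sketch of one round trip (the section names, scope URI and values are invented): the server asks for two settings, and the client answers in the same order, using null where it has no value for the requested scope:

```typescript
// Hypothetical workspace/configuration params: two items in one round trip.
const params = {
  items: [
    { scopeUri: "file:///project/src/main.cpp", section: "cpp.formatterOptions" },
    { section: "cpp.diagnostics" },
  ],
};

// The response array matches the item order; this client has no
// value for the second section, so it answers null at that position.
const response = [{ indentWidth: 4 }, null];
```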

DidChangeConfiguration Notification ( workspace/didChangeConfiguration )

A notification sent from the client to the server to signal the change of configuration settings.

Workspace Folders Request ( workspace/workspaceFolders )

Many tools support more than one root folder per workspace. Examples for this are VS Code’s multi-root support, Atom’s project folder support or Sublime’s project support. If a client workspace consists of multiple roots then a server typically needs to know about this. The protocol up to now assumes one root folder which is announced to the server by the rootUri property of the InitializeParams . If the client supports workspace folders and announces them via the corresponding workspaceFolders client capability, the InitializeParams contain an additional property workspaceFolders with the configured workspace folders when the server starts.

The workspace/workspaceFolders request is sent from the server to the client to fetch the current open list of workspace folders. Returns null in the response if only a single file is open in the tool. Returns an empty array if a workspace is open but no folders are configured.

DidChangeWorkspaceFolders Notification ( workspace/didChangeWorkspaceFolders )

The workspace/didChangeWorkspaceFolders notification is sent from the client to the server to inform the server about workspace folder configuration changes. The notification is sent by default if both client capability workspace.workspaceFolders and the server capability workspace.workspaceFolders.supported are true; or if the server has registered itself to receive this notification. To register for the workspace/didChangeWorkspaceFolders send a client/registerCapability request from the server to the client. The registration parameter must have a registrations item of the following form, where id is a unique id used to unregister the capability (the example uses a UUID):
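The registration described above could look like the following sketch (the id is any unique string; the UUID here is illustrative):

```typescript
// Hypothetical client/registerCapability params registering for the
// workspace/didChangeWorkspaceFolders notification.
const registrationParams = {
  registrations: [
    {
      // Unique id, later used to unregister the capability.
      id: "28c6150c-bd7b-11e7-abc4-cec278b6b50a",
      method: "workspace/didChangeWorkspaceFolders",
    },
  ],
};
```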

WillCreateFiles Request ( workspace/willCreateFiles )

The will create files request is sent from the client to the server before files are actually created as long as the creation is triggered from within the client either by a user action or by applying a workspace edit. The request can return a WorkspaceEdit which will be applied to workspace before the files are created. Hence the WorkspaceEdit can not manipulate the content of the files to be created. Please note that clients might drop results if computing the edit took too long or if a server constantly fails on this request. This is done to keep creates fast and reliable.

The capability indicates that the client supports sending workspace/willCreateFiles requests.

The capability indicates that the server is interested in receiving workspace/willCreateFiles requests.

Registration Options : none

DidCreateFiles Notification ( workspace/didCreateFiles )

The did create files notification is sent from the client to the server when files were created from within the client.

The capability indicates that the client supports sending workspace/didCreateFiles notifications.

The capability indicates that the server is interested in receiving workspace/didCreateFiles notifications.

WillRenameFiles Request ( workspace/willRenameFiles )

The will rename files request is sent from the client to the server before files are actually renamed as long as the rename is triggered from within the client either by a user action or by applying a workspace edit. The request can return a WorkspaceEdit which will be applied to workspace before the files are renamed. Please note that clients might drop results if computing the edit took too long or if a server constantly fails on this request. This is done to keep renames fast and reliable.

The capability indicates that the client supports sending workspace/willRenameFiles requests.

The capability indicates that the server is interested in receiving workspace/willRenameFiles requests.

DidRenameFiles Notification ( workspace/didRenameFiles )

The did rename files notification is sent from the client to the server when files were renamed from within the client.

The capability indicates that the client supports sending workspace/didRenameFiles notifications.

The capability indicates that the server is interested in receiving workspace/didRenameFiles notifications.

WillDeleteFiles Request ( workspace/willDeleteFiles )

The will delete files request is sent from the client to the server before files are actually deleted as long as the deletion is triggered from within the client either by a user action or by applying a workspace edit. The request can return a WorkspaceEdit which will be applied to workspace before the files are deleted. Please note that clients might drop results if computing the edit took too long or if a server constantly fails on this request. This is done to keep deletes fast and reliable.

The capability indicates that the client supports sending workspace/willDeleteFiles requests.

The capability indicates that the server is interested in receiving workspace/willDeleteFiles requests.

DidDeleteFiles Notification ( workspace/didDeleteFiles )

The did delete files notification is sent from the client to the server when files were deleted from within the client.

The capability indicates that the client supports sending workspace/didDeleteFiles notifications.

The capability indicates that the server is interested in receiving workspace/didDeleteFiles notifications.

DidChangeWatchedFiles Notification ( workspace/didChangeWatchedFiles )

The watched files notification is sent from the client to the server when the client detects changes to files and folders watched by the language client (note that although the name suggests that only file events are sent, it is about file system events, which include folders as well). It is recommended that servers register for these file system events using the registration mechanism. In former implementations clients pushed file events without the server actively asking for it.

Servers are allowed to run their own file system watching mechanism and not rely on clients to provide file system events. However this is not recommended due to the following reasons:

Registration Options : DidChangeWatchedFilesRegistrationOptions defined as follows:

Where FileEvents are described as follows:

Execute a Command ( workspace/executeCommand )

The workspace/executeCommand request is sent from the client to the server to trigger command execution on the server. In most cases the server creates a WorkspaceEdit structure and applies the changes to the workspace using the request workspace/applyEdit which is sent from the server to the client.

Registration Options : ExecuteCommandRegistrationOptions defined as follows:

The arguments are typically specified when a command is returned from the server to the client. Example requests that return a command are textDocument/codeAction or textDocument/codeLens .

Applies a WorkspaceEdit ( workspace/applyEdit )

The workspace/applyEdit request is sent from the server to the client to modify resources on the client side.

See also the WorkspaceEditClientCapabilities for the supported capabilities of a workspace edit.

Window Features

ShowMessage Notification ( window/showMessage )

The show message notification is sent from a server to a client to ask the client to display a particular message in the user interface.

Where the type is defined as follows:

ShowMessage Request ( window/showMessageRequest )

The show message request is sent from a server to a client to ask the client to display a particular message in the user interface. In addition to the show message notification, the request allows passing actions and waiting for an answer from the client.

Where the MessageActionItem is defined as follows:

Show Document Request ( window/showDocument )

New in version 3.16.0

The show document request is sent from a server to a client to ask the client to display a particular resource referenced by a URI in the user interface.

LogMessage Notification ( window/logMessage )

The log message notification is sent from the server to the client to ask the client to log a particular message.

Create Work Done Progress ( window/workDoneProgress/create )

The window/workDoneProgress/create request is sent from the server to the client to ask the client to create a work done progress.

Cancel a Work Done Progress ( window/workDoneProgress/cancel )

The window/workDoneProgress/cancel notification is sent from the client to the server to cancel a progress initiated on the server side using the window/workDoneProgress/create . The progress need not be marked as cancellable to be cancelled and a client may cancel a progress for any number of reasons: in case of error, reloading a workspace etc.

Telemetry Notification ( telemetry/event )

The telemetry notification is sent from the server to the client to ask the client to log a telemetry event. The protocol doesn’t specify the payload since no interpretation of the data happens in the protocol. Most clients don’t even handle the event directly but forward it to the extension owning the corresponding server issuing the event.

Miscellaneous

Language servers usually run in a separate process and clients communicate with them in an asynchronous fashion. Additionally clients usually allow users to interact with the source code even if request results are pending. We recommend the following implementation pattern to avoid that clients apply outdated response results:

Servers usually support different communication channels (e.g. stdio, pipes, …). To ease the usage of servers in different clients it is highly recommended that a server implementation supports the following command line arguments to pick the communication channel:

To support the case where the editor that started a server crashes, the editor should also pass its process id to the server. This allows the server to monitor the editor process and to shut itself down if the editor process dies. The process id passed on the command line should be the same as the one passed in the initialize parameters. The command line argument to use is --clientProcessId .
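A sketch of how a client might assemble such a command line (the --stdio channel flag is one of the recommended channel arguments; --clientProcessId is the argument named above; the pid value here is illustrative):

```typescript
// Build the argument vector for launching a language server over stdio,
// passing the editor's process id so the server can monitor it.
function serverArgs(clientPid: number): string[] {
  return ["--stdio", `--clientProcessId=${clientPid}`];
}

const args = serverArgs(12345);
```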

Since 3.17 there is a meta model describing the LSP protocol:

3.17.0 (05/10/2022)

3.16.0 (12/14/2020)

3.15.0 (01/14/2020)

3.14.0 (12/13/2018)

3.13.0 (9/11/2018)

3.12.0 (8/23/2018)

3.11.0 (8/21/2018)

3.10.0 (7/23/2018)

3.9.0 (7/10/2018)

3.8.0 (6/11/2018)

3.7.0 (4/5/2018)

3.6.0 (2/22/2018)

Merge the proposed protocol for workspace folders, configuration, go to type definition, go to implementation and document color provider into the main branch of the specification. For details see:

In addition we enhanced the CompletionTriggerKind with a new value TriggerForIncompleteCompletions = 3 to signal that a completion request was triggered because the last result was incomplete.

Decided to skip this version to bring the protocol version number in sync with the npm module vscode-languageserver-protocol.

3.4.0 (11/27/2017)

3.3.0 (11/24/2017)

3.2.0 (09/26/2017)

3.1.0 (02/28/2017)

3.0 Version

OCaml Language Server Protocol implementation

ocaml/ocaml-lsp

OCaml-LSP is a language server for OCaml that implements Language Server Protocol (LSP).

If you use Visual Studio Code, see the OCaml Platform extension page for detailed instructions on setting up your editor for OCaml development with OCaml-LSP: what packages need to be installed, how to configure your project and get the most out of the OCaml editor support, and how to report and debug problems.

Installing from sources

Below we show how to install OCaml-LSP using opam, esy, and from sources. OCaml-LSP comes in a package called ocaml-lsp-server but the installed program (i.e., binary) is called ocamllsp .

Installing with package managers

To install the language server in the currently used opam switch :
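Assuming the package name given above, the opam invocation is:

```sh
opam install ocaml-lsp-server
```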

Note: you will need to install ocaml-lsp-server in every switch where you would like to use it.

To add the language server to an esy project, run in terminal:
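The esy counterpart (the package is scoped under @opam) is:

```sh
esy add @opam/ocaml-lsp-server
```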

This project uses submodules to handle dependencies. This is done so that users who install ocaml-lsp-server into their sandbox will not share dependency constraints on the same packages that ocaml-lsp-server is using.

Install ocamlformat package if you want source file formatting support.

Note: To have source file formatting support in your project, there needs to be an .ocamlformat file present in your project's root directory.

OCaml-LSP also uses a program called ocamlformat-rpc to format code that is either generated or displayed by OCaml-LSP, e.g., when you hover over a module identifier, you can see its type nicely formatted. This program comes with ocamlformat (version > 0.21.0). Previously, it was a standalone package.

Usually, your code editor, or some extension/plugin that you install on it, is responsible for launching ocamllsp .

Important: the OCaml language server gets its information about your files from the last time you built your project. We recommend using the Dune build system and running it in "watch" mode so that OCaml-LSP always functions correctly, e.g., dune build --watch .

since OCaml-LSP 1.11.0

OCaml-LSP can communicate with Dune's RPC system to offer some interesting features. Users can launch Dune's RPC system by running Dune in watch mode. OCaml-LSP will not launch Dune's RPC for you, but if an RPC is already running, it will detect it and communicate with it automatically.

There are various interesting features and caveats:

Dune's RPC enables new kinds of diagnostics (i.e., warnings and errors) to be shown in the editor, e.g., mismatching interface and implementation files. You need to save the file to refresh such diagnostics because Dune doesn't see unsaved files; otherwise, you may see stale (no longer correct) warnings or errors. OCaml-LSP updates diagnostics after each build is complete in watch mode.

Dune file promotion support. If you, for example, use ppx_expect and have failing tests, you will get a diagnostic when Dune reports that your file can be promoted. You can promote your file using the code action Promote .

If you would like OCaml-LSP to respect your .merlin files, OCaml-LSP needs to be invoked with --fallback-read-dot-merlin argument passed to it.

The server supports the following LSP requests (inexhaustive list):

Note that the degree of support varies between LSP requests.

since OCaml-LSP 1.15.0 (since version 1.15.0-4.14 for OCaml 4, 1.15.0-5.0 for OCaml 5)

Semantic highlighting support is enabled by default.

since OCaml-LSP 1.14.0

OCaml-LSP implements experimental semantic highlighting support (also known as semantic tokens support). The support can be activated by passing an environment variable to OCaml-LSP:

To enable non-incremental (expectedly slower but more stable) version, pass OCAMLLSP_SEMANTIC_HIGHLIGHTING=full environment variable to OCaml-LSP.

To enable incremental (potentially faster but more error-prone, at least on VS Code) version, pass OCAMLLSP_SEMANTIC_HIGHLIGHTING=full/delta to OCaml-LSP.

Tip (for VS Code OCaml Platform users): You can use ocaml.server.extraEnv setting in VS Code to pass various environment variables to OCaml-LSP.

The server also supports a number of OCaml specific extensions to the protocol:

Note that editor support for these extensions varies. In general, the OCaml Platform extension for Visual Studio Code will have the best support.

Destructing a value

since OCaml-LSP 1.0.0

OCaml-LSP has a code action that lets you generate an exhaustive pattern match for a value. For example, placing the cursor near a value (Some 10)| , where | is your cursor, OCaml-LSP will offer the code action "Destruct", which replaces (Some 10) with (match Some 10 with | None -> _ | Some _ -> _) . Importantly, one can only destruct a value if OCaml-LSP can infer the value's precise type; the value may need to be type-annotated, e.g., if it's a function argument with a polymorphic (or yet unknown) type in this context. In the code snippet below, we type-annotate the function parameter v because when we type let f v = v| , the type of v is polymorphic, so we can't destruct it.
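The snippet referenced here was lost in extraction; a compilable sketch follows (the names f and g and the int option annotation are illustrative, and the generated branches normally contain typed holes, filled in here with real expressions so the example compiles):

```ocaml
(* Before "Destruct": annotate v so its type is concrete rather than polymorphic *)
let f (v : int option) = v

(* After invoking "Destruct" on v, the body becomes an exhaustive match;
   the generated code would place typed holes (_) in each branch *)
let g (v : int option) =
  match v with
  | None -> 0
  | Some x -> x
```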

You can also usually destruct the value by placing the cursor on the wildcard ( _ ) pattern in a pattern-match. For example,

invoking destruct near the cursor ( | ) in the snippet above, you get

Importantly, note the underscores in place of expressions in each branch of the pattern match above. The underscores that occur in place of expressions are called "typed holes", a concept explained below.

Tip (formatting): generated code may not be well formatted. If your project uses a formatter such as OCamlFormat, you can run formatting to get a well-formatted document (OCamlFormat supports formatting of typed holes).

Tip (for VS Code OCaml Platform users): You can destruct a value using a keybinding Alt + D or on MacOS Option + D

Typed holes

since OCaml-LSP 1.8.0

OCaml-LSP has a concept of a "typed hole" syntactically represented as _ (underscore). A typed hole represents a well-typed "substitute" for an expression. OCaml-LSP considers these underscores that occur in place of expressions as a valid well-typed OCaml program: let foo : int = _ (the typed hole has type int here) or let bar = _ 10 (the hole has type int -> 'a ). One can use such holes during development as temporary substitutes for expressions and "plug" the holes later with appropriate expressions.

Note, files that incorporate typed holes are not considered valid OCaml by the OCaml compiler and, hence, cannot be compiled.

Also, an underscore occurring in place of a pattern (for example let _ = 10 ) should not be confused with a typed hole that occurs in place of an expression, e.g., let a = _ .

Constructing values by type (experimental)

OCaml-LSP can "construct" expressions based on the type required and offer them during auto-completion. For example, typing _ (typed hole) in the snippet below will trigger auto-completion ( | is your cursor):
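The snippet was elided; here is a sketch of a module shape consistent with the completions described in the surrounding text (the name Foo and the constructor types are inferred from it; the typed-hole line is shown in a comment because holes don't compile):

```ocaml
(* A module whose type matches the completions Foo.A and Foo.B _ *)
module Foo = struct
  type t = A | B of string option
end

(* What you would type, with | as the cursor:
     let v : Foo.t = _|
   Constructing each remaining hole in turn can end with: *)
let v : Foo.t = Foo.B (Some "")
```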

The auto-completion offers the completions Foo.A and Foo.B _ . You can construct values further by placing the cursor as such: Foo.B _| and triggering the code action "Construct an expression", which offers the completions None and Some _ . Triggering the same code action on Some _| will offer "" , one of the possible expressions to replace the typed hole with.

Constructing a value is thus triggered either by typing _ in place of an expression or by invoking the code action "Construct an Expression". Also, the type of the value needs to be non-polymorphic for a meaningful value to be constructed.

Tip (for VS Code OCaml Platform users): You can construct a value using a keybinding Alt + C or on MacOS Option + C

If you use Visual Studio Code, please see OCaml Platform extension page for a detailed guide on how to report and debug problems.

If you use another code editor and use OCaml-LSP, you should be able to set the server trace to verbose using your editor's LSP client and watch the trace for errors such as logged exceptions.

User-visible changes should come with an entry in the changelog under the appropriate part of the unreleased section. A PR that doesn't provide an entry will fail the CI check. This behavior can be overridden with the "no changelog" label, which is used for changes that are not user-visible.

To run tests execute:

Note that tests require Node.js and Yarn installed.

The lsp server uses merlin under the hood, but users are not required to have merlin installed. We vendor merlin because we currently heavily depend on some implementation details of merlin that make it infeasible to upgrade the lsp server and merlin independently.

The implementation of the LSP protocol itself was taken from Facebook's Hack.

Previously, this lsp server was a part of merlin, until it was realized that the lsp protocol covers a wider scope than merlin.

Note that the comparisons below make no claims of being objective and may be entirely out of date. Also, both servers seem deprecated.

reason-language-server This server supports BuckleScript & Reason. However, this project does not use Merlin, which means that it supports fewer versions of OCaml and offers less "smart" functionality, especially in the face of sources that do not yet compile.

ocaml-language-server This project is extremely similar in the functionality it provides because it also reuses merlin on the backend. The essential difference is that this project is written in typescript, while our server is in OCaml. We feel that it's best to use OCaml to maximize the contributor pool.


Choose Your Preferred Tools


To use cds from your command line install @sap/cds-dk globally:
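The install command was lost in extraction; it is the standard global npm install of the package named in the sentence above:

```shell
# Install the cds command line tools globally
npm i -g @sap/cds-dk
# Verify the installation
cds --version
</imports>
```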


Start cds watch and enter debug . This restarts the application in debug mode. Similarly, debug-brk will start debug mode, but pause the application at the first line, so that you can debug bootstrap code.

If you executed cds watch on a standalone terminal, you can still attach a Node.js debugger to the process.

For example:

Visual Studio Code


Learn more about the CDS Editor .

To run services, just open the Integrated Terminal in VS Code and use one of the cds serve variants, for example, use cds watch to automatically react on changes.

Alternatively, you can use the preconfigured tasks or launch configurations you get when creating a project with cds init . For example, in the Debug view launch cds run with the green arrow button:


You can add and stop at breakpoints in your service implementations. For example, add one to line 10 of our srv/cat-service.js by clicking in the gutter as shown here:


… then send the …/Books request again to stop there.

Restart the server when you make changes to your code using the Debug view's restart button:


A CAP Notebook is a Custom Notebook in Visual Studio Code that serves you as a guide on how to create, navigate, and monitor CAP projects. With this approach, we want to encourage the CAP community to work with CAP in the same explorative manner that scientists work with their data, namely by:

The cell inputs/outputs are especially useful at later points in time when the project’s details have long been forgotten. In addition, notebooks are a good way to share, compare, and also reproduce projects.


To see which features are available in a CAP Notebook, open our CAP Notebook page : F1 -> CDS: Open CAP Notebooks Page

Magics, or magic commands, known from IPython, are convenient functions to solve common problems. To see which line- and cell-magics can be used within a CAP Notebook, run a code cell with %quickref .

Start an empty CAP Notebook by creating a *.capnb file.

Provided that the CDS Editor is installed, the CAP Notebook will be rendered automatically as the file is selected.

Create a file called Dockerfile and add this content for a quick setup:
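The file content was elided; a plausible minimal Dockerfile for trying out the cds CLI in a container (the base image choice is an assumption):

```dockerfile
# Minimal Node.js base image with the cds CLI installed globally
FROM node:lts-alpine
RUN npm i -g @sap/cds-dk
```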

Build your first image:
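The commands were elided; a plausible build-and-run sequence (the image tag cds is an arbitrary example name):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t cds .
# Start an interactive shell inside the container
docker run --rm -it cds sh
```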

You see a $ command prompt from inside the container.

CDS Editors & LSP

The editor, powered by the CDS language server implementation, provides source code validation including diagnostics, like error messages and warnings.

The following features are available for all editors based on our language server implementation for CDS in SAP Business Application Studio, Visual Studio Code, and Eclipse. The plugins are available for download for Visual Studio Code at Visual Studio Marketplace and for Eclipse at SAP Development Tools .

Short video about the SAP CDS language support extension for VS Code in action by DJ Adams.

Syntax highlighting

Code completion

Where-used navigation

Code formatting

Inventory (symbols)

Snippets for typical CDS language constructs

With documentation extracts of capire explaining language concepts.

Hover information based on

Translation support

Code formatting settings

These are settings coming with the CDS language server implementation. Use the command CDS: Show Formatting Options Configuration . You see the settings grouped into three tabs: Alignment , Other , and Whitespace .

Format on Type, Format on Paste, and Format on Save in VS Code

These are settings from the editor in VS Code:

Cds: Workspace Validation Mode

Default: ActiveEditorOnly

Keeps track of the active editor in focus. Only changes there are immediately validated.

The ActiveEditorOnly mode is especially useful when navigating through a large model, that is, having multiple files open (even if they are not shown as tabs) and editing a file that the others directly or indirectly depend on.

If switched to OpenEditorsAndDirectSources , all model files are recompiled on every change, for example, every typed character. If switched to OpenEditorsOnly , all open files (for example, split tabs) are recompiled. For large models, this can lead to high CPU and memory load and, consequently, weak responsiveness of the editor.

Cds > Contributions > Enablement: Odata*

Default: on

This setting enables extended support for annotations, that is, refined diagnostics and code completion. It can be switched off for performance gains.

Cds > Workspace: ScanCsn

Default: off

Switch on to scan the workspace for CSN files in addition to CDS source files.

Note: CSN files are still considered if used from a CDS source file.

Cds > Quickfix: ImportArtifact

Enable to get quickfix proposals for artifact names, like entities, that aren’t imported via a using statement. For that, all definitions in the workspace need to be considered, which might be slow.

Welcome page

If there are new release notes, this page opens on startup. You can disable this behavior using the CDS > Release Notes: Show Automatically ( cds.releaseNotes.showAutomatically ) setting.

CAP Notebooks Page

This page provides information on all the features available in a CAP Notebook, with a brief description and examples for each.

Beautify settings

Preview CDS sources

You want to create a preview of a specific .cds file in your project. You can do that using the command line. Here is how you do it in VS Code:

Visualize CDS file dependencies

Use the command from the context menu on a folder or CDS file.

A selection popup appears to choose one of three modes:

The first option shows every model file on its own. For very large models, the number of files and interdependencies may be too complex to be graphically shown. A message about insufficient memory will appear. In this case use the second option.

The second option reduces the graph by only showing the folders of all involved files and their interdependencies.

Only those files are evaluated that are reachable from the start model where the command was invoked on.

The third option always considers all files in a folder and their dependencies. This can be useful to understand architectural violations.

Example for architectural violation: You want a clean layering in your project: app -> srv -> db . With this option, you can visualize and identify that there is a dependency from a file in the service layer to an annotation file in the application layer.

Hovering over a node will show the number of files involved and the combined size of all involved files. Use this function to get a rough understanding about the complexity and the compilation speed.

The command requires the third-party extension Graphviz (dot) language support for Visual Studio Code (joaompinto.vscode-graphviz). If you haven’t installed it already, it will be suggested to install.

With the following settings you can influence the performance of the editor:

Editor > Goto Location: Alternative Definition Command

Do not select goToReferences . Otherwise, being already on a definition often requires all models to be recompiled.

Workbench > Editor > Limit: Value

If open editors have using dependencies, a change in one editor will lead to a recompile of related editors. To decrease the impact on performance, lower the number.

Workbench > Editor > Limit: Enabled

To enable the limit value above, switch on .

Additional Hints to Increase Performance:

The CDS code formatter provides a command line interface. Use it as a pre-commit hook or within your CI/CD pipeline to guarantee consistent formatting.

Installation

Install the CDS language server globally via npm i -g @sap/cds-lsp . This makes a new shell command format-cds available.

Show help via format-cds -h . This explains all commands and formatting options in detail including the default value for each formatting option.

It is recommended to generate, once for each project, a settings file ( .cdsprettier.json ) with all default formatting options available. Execute format-cds --init in the project root; an existing file will not be overwritten. To adapt the settings to your preferred style, open the .cdsprettier.json file in VS Code, where you get code completion and help for each option. There is also a settings UI in SAP CDS Language Support , reachable via the command CDS: Show Formatting Options Configuration , which lets you see the effect of each formatting option on an editable sample source. Commit the .cdsprettier.json file into your version control system.

Use format-cds to format all your CDS source files. The effective set of formatting options is calculated in order of precedence:

It is possible to have .cdsprettier.json files in subfolders. In this case the most relevant settings file per CDS source file is taken.

Use format-cds <foldername1> <foldername2> <filename> ... to restrict the set of CDS source files. By default, backup files with .bak file extension will be created.

Use the -f switch to force an overwrite without creating a backup. This is at your own risk: should there be problems, data loss might occur, especially when formatting in a pre-commit hook. It is better to add .bak to your .gitignore file and not use -f .
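Putting the pieces together, a typical invocation sequence looks like this (the folder names db and srv are examples, not fixed requirements):

```shell
# One-time: generate .cdsprettier.json with all defaults (won't overwrite)
format-cds --init
# Format selected folders/files; .bak backups are created by default
format-cds db srv
```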

CDS Lint & ESlint

In your project’s root folder, execute:
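The command was elided; presumably it is cds lint , which the text later references via DEBUG="lint" cds lint :

```shell
# Lint the CDS model files of the current project
cds lint
```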

If there are no lint errors, there is no output. Otherwise, a standard ESLint error report will be printed.

Download the standard ESLint extension for Visual Studio Code . CDS Lint seamlessly integrates with it. For SAP Business Application Studio this is preinstalled.

Configure our recommended rules for CDS model files in your project:

This automatically adds the settings for the ESLint VS Code extension to the project’s VS Code settings, installs the CDS ESLint plugin, and adds it to the ESLint configuration of your project.

The CDS Lint rules are a set of generic rules based on CAP best practices. The subset of these we consider most essential is part of the recommended configuration of the @sap/eslint-plugin-cds package.

Rules in ESLint are grouped by type to help you understand their purpose. Each rule has emojis denoting:

✔️ if the plugin’s “recommended” configuration enables the rule

🔧 if problems reported by the rule are automatically fixable ( --fix )

💡 if problems reported by the rule are manually fixable (editor)

Configuring CDS Lint Rules

Individual package rules can also be configured to be turned off or have a different severity. For example, if you want to turn off the recommended environment rule min-node-version , just add the following lines to your ESLint configuration file , shown here for type json :
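The configuration example was elided; a plausible .eslintrc.json fragment (the @sap/cds/ rule prefix is an assumption based on the plugin name @sap/eslint-plugin-cds):

```json
{
  "rules": {
    "@sap/cds/min-node-version": "off"
  }
}
```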

Using the ESLint CLI

If you want to have more control over the linting process, you can access the CDS ESLint plugin natively via the ESLint CLI . To determine the proper command line options, it can help to refer to the DEBUG="lint" cds lint output, which shows all of the options and flags available.

SAP Business Application Studio

If not already done, set up SAP Business Application Studio on SAP BTP.

Open the SAP BTP cockpit and choose SAP Business Application Studio from the Quick Tool Access section.

Choose Create Dev Space .

Provide a name for your dev space.

Choose Full Stack Cloud Application as the application type.

By selecting Full Stack Cloud Application , your dev space comes with several extensions out of the box that you need to develop CAP applications. For example, CAP Tools, Java Tools, and MTA Tools are built in. This saves setup time. See Developer Guide for SAP Business Application Studio for more details.

The creation of the dev space takes a while. You see that the status for your dev space changes from STARTING to RUNNING . See Dev Space Types for more details.

Once the dev space is running, choose the dev space by clicking on the dev space name.

You’re using a trial version. Any dev space that hasn’t been running for 30 days will be deleted. See the full list of restrictions .

To learn about the features specific to CAP development in the studio, see the guide Developing a CAP Application in SAP Business Application Studio

Set Up SAP Business Application Studio for Development .


Memory Debugging and Watch Annotations


RAM profiling has its strengths and weaknesses. The debugger is the perfect complementary tool that translates obtuse statistics into actionable changes.

By Shai Almog


Before diving into debugging memory issues and the memory debugging capabilities (which are amazing), I want to discuss a point I left open in the last duckling post . Back there we discussed customizing the watch renderer. This is super cool!

But it's also tedious. Before we continue, if you prefer, I cover most of these subjects in these videos:

Watch Annotations

Last time we discussed customizing the watch UI to render complex objects more effectively. But there's one problem with that: "We aren't alone".

We're a part of a team. Doing this for every machine is difficult and frustrating. What if you're building a library or an API and want this behavior by default?

That's where JetBrains provides a unique solution: custom annotations. Just annotate your code with hints to the debugger and configuration will be seamless to your entire team/users. In order to do this, we need to add the JetBrains annotations to the project path. You can do it by adding this to the Maven POM file:
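The POM snippet was elided; a plausible dependency block follows (the artifact org.jetbrains:annotations is the standard JetBrains annotations library, but the version number here is illustrative, so check Maven Central for the latest):

```xml
<!-- JetBrains annotations for debugger renderer hints; version is illustrative -->
<dependency>
    <groupId>org.jetbrains</groupId>
    <artifactId>annotations</artifactId>
    <version>23.0.0</version>
</dependency>
```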

Once this is done, we can annotate the class from the previous duckling to achieve the same effect

Notice that we need to escape the strings in the annotation so they will be valid Java strings: we escape the quote symbols and use them to write a "proper" string.

Again everything else matches the content and result we saw in the previous duckling.

Memory Debugger

The primary focus of this post is the memory debugging capabilities. By default, JetBrains disables most of these capabilities to boost program execution performance. We can enable the memory debugger view by checking it on the right-hand side of the bottom tool window.

memory-debugging-1.png

Worse, this has such an impact on performance that IntelliJ doesn't load the actual content of a class until we explicitly click the "Load Classes" button in the center of the memory monitor:

memory-debugging-2.png

As you can imagine, this gets old fast. If your machine is slow, this is a great default. But if you have an exceptionally powerful machine, you might want to turn on "Update Loaded Classes on Debugger Stop":

memory-debugging-4.png

This effectively disables the requirement to click at the cost of slower step over execution. But what do we get as a result?

The panel shows us where a memory block is used when stepping over code or jumping between breakpoints. The memory footprint isn't as obvious, but the scale of memory allocation is.

The diff column is especially useful in tracking issues such as memory leaks. You can get a sense of where a leaking object was allocated and the types of objects that were added between two points. You can get a very low level sense of the memory over time. It's a low level view that's more refined than the profiler view we normally use.

But there's more. We can double click every object on the list and see this:

memory-debugging-5.jpeg

Here we can see all the objects of this type that were allocated in the entire heap. We can get a sense of what's really held in a memory location and again gain deeper insight into potential memory leaks.

"Track New Instances" enables even more tracking of heap allocations. We can enable this on a per-object-type basis. Notice this only applies to "proper" objects and not arrays. You can enable it through the right-click menu:

memory-debugging-6.png

Once we enable this, heap allocations are tracked everywhere. We get backtraces for memory allocations that we can use to narrow down the exact line of code that allocated every object in the heap!

memory-debugging-7.png

The real benefit though is in the enhanced diff capability. When this is enabled, we can differentiate the specific objects allocated at this point. Say you have a block of code that leaks an object of type MyObject . If you enable tracking on MyObject and run between the two breakpoints, you can see every allocation of MyObject performed only in this block of code...

The backtraces for memory allocations are the missing piece that would show you where each of these object instances was allocated. Literal stack traces from the memory allocator!

This is difficult to see sometimes in memory intensive applications. When multiple threads allocate multiple objects in memory, the noise is hard to filter. But of all the tools I used, this is by far the easiest.

One of my favorite things in Java is the lack of real memory errors. There are no invalid memory addresses. No uninitialized memory that leads to invalid memory accesses. No invalid pointers, no memory address (that we're exposed to) or manual configuration. Things "just work".

But there are still pain points that go beyond garbage collection tuning . Heap size is one of the big pain points in Java. It doesn't have to be a leak. Sometimes it's just wastefulness we don't understand. Where does the extra memory go?

The debugger lets us draw a straight line with stack traces directly to the source code line. We can inspect memory contents and get applicable memory statistics that go well beyond the domain of a profiler. Just to be clear: profilers are great for looking at memory in a "big picture" way. The debugger can flesh out that picture with a complete list of allocations for a specific block of code.

Kichwa Coders


Eclipse Experts for Embedded Tools Solutions

Debug Protocol vs Language Server Protocol

It is safe to say that the Language Server Protocol (LSP) is the future of developer tools. When it comes to the equivalent for debugging, the debug protocol is 'LSP for debuggers'. It is a useful tagline, but here are three key differences to be aware of:

All that being said, it’s worth repeating the common benefits of both protocols which are:


Enterprise Java Development in Emacs

My current Java setup utilises lsp-java and dap-java , which together give me a good Java coding and debugging experience in Emacs, including code navigation, auto-completion, documentation lookup, on-the-fly decompiling of 3rd-party libraries, and remote debugging of application servers.

Everything works pretty much out of the box, once you've wired up lsp-java and dap-java , it'll figure out your Maven project by itself and download whatever it needs in the background. I've used this setup for a good while now on a code base with around 5000 Java files and 65 Maven modules and the performance is impeccable.

You can see my setup in action in the screencast below.

Support for Lombok

Troubleshooting

If lsp-java doesn't work, a good place to start is to look in the Eclipse server log file:

java.lang.NoSuchMethodError: java.nio.ByteBuffer.limit(I)Ljava/nio/ByteBuffer;

You need to make the language server use Java 9 and it must be OpenJDK:

See this ticket over at jira.mongodb.org for a good explanation of why these errors happen.

java.security.InvalidAlgorithmParameterException

The easiest solution is to just use OpenJDK instead of Oracle Java to run the Eclipse server (I just did apt-get install openjdk-9-jdk ), you can set this in your .emacs by setting lsp-java-java-path , see above.

NoClassDefFoundError: javax/annotation/processing/AbstractProcessor

The problem was that by having Lombok on the boot classpath it would interfere with the annotation processing mechanism in the Eclipse server. The remedy was to remove -Xbootclasspath/a:/path/to/lombok-1.16.18.jar and only specify Lombok in the javaagent parameter.

Wipe the slate clean

To remove all generated files and caches related to LSP, do:

This can be worth trying before going mad about something not working.

Also, there's a minimal .emacs config on the lsp-java website that you can try without your own configuration to ensure your problems are not due to your own (combination of) configuration.

Other Java extensions for Emacs I've used


Metals works in Emacs thanks to the lsp-mode package (another option is the Eglot package).

Requirements ​

Java 8, 11, 17 provided by OpenJDK or Oracle . Eclipse OpenJ9 is not supported; please make sure the JAVA_HOME environment variable points to a valid Java 8, 11, or 17 installation.

macOS, Linux or Windows . Metals is developed on many operating systems and every PR is tested on Ubuntu, Windows, and macOS.

Scala 2.13, 2.12, 2.11 and Scala 3 . Metals supports these Scala versions:

Scala 2.13 : 2.13.10, 2.13.9, 2.13.8, 2.13.7, 2.13.6, 2.13.5, 2.13.4, 2.13.3

Scala 2.12 : 2.12.17, 2.12.16, 2.12.15, 2.12.14, 2.12.13, 2.12.12, 2.12.11, 2.12.10

Scala 2.11 : 2.11.12

Scala 3 : 3.3.0-RC3, 3.3.0-RC2, 3.2.2, 3.2.1, 3.2.0, 3.1.3, 3.1.2, 3.1.1, 3.1.0, 3.0.2

Note that 2.11.x support is deprecated and will be removed in future releases. It's recommended to upgrade to Scala 2.12 or Scala 2.13.

Installation ​

To use Metals in Emacs, place this snippet in your Emacs configuration (for example .emacs.d/init.el) to load lsp-mode along with its dependencies:
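The configuration snippet was lost in extraction; a minimal sketch follows, assuming use-package and MELPA are already set up (the package names scala-mode, lsp-mode, lsp-metals, and lsp-ui are the ones the surrounding text refers to; treat the exact settings as assumptions):

```elisp
;; Minimal sketch of a Metals setup for Emacs
(use-package scala-mode
  :interpreter ("scala" . scala-mode))

(use-package lsp-mode
  ;; Start the language client in Scala buffers
  :hook (scala-mode . lsp)
  ;; lsp-lens-mode is required for the code lenses mentioned later
  :config (add-hook 'lsp-mode-hook #'lsp-lens-mode))

(use-package lsp-metals)
(use-package lsp-ui)
```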

You may need to disable other packages like ensime or sbt server to prevent conflicts with Metals.

Next you have to install the Metals server. Emacs can do it for you when lsp-mode is enabled in a Scala buffer, or via the lsp-install-server command. You can also do it manually by executing coursier install metals and configuring your $PATH variable properly.

Importing a build ​

The first time you open Metals in a new workspace it prompts you to import the build. Type "Import build" or press Tab and select "Import build" to start the installation step.

Import build

Once the import step completes, compilation starts for your open *.scala files.

Once the sources have compiled successfully, you can navigate the codebase with goto definition.

Custom sbt launcher ​

By default, Metals runs an embedded sbt-launch.jar launcher that respects .sbtopts and .jvmopts . However, the environment variables SBT_OPTS and JAVA_OPTS are not respected.

Update the server property -Dmetals.sbt-script=/path/to/sbt to use a custom sbt script instead of the default Metals launcher if you need further customizations like reading environment variables.

Speeding up import ​

The "Import build" step can take a long time, especially the first time you run it in a new build. The exact time depends on the complexity of the build and if library dependencies need to be downloaded. For example, this step can take everything from 10 seconds in small cached builds up to 10-15 minutes in large uncached builds.

Consult the Bloop documentation to learn how to speed up build import.

Importing changes ​

When you change build.sbt or sources under project/ , you will be prompted to re-import the build.

Import sbt changes

Show navigable stack trace ​

You can annotate your stack trace with code lenses (which requires the following bit of configuration mentioned earlier: (lsp-mode . lsp-lens-mode) ). These allow you to run actions from your code.

One of these actions allows you to navigate your stack trace.

You can annotate any stack trace by marking a stack trace with your region and using M-x lsp-metals-analyze-stacktrace on it.

This will open a new Scala buffer with code lens annotations: just click on the small "open" annotation to navigate to the source code corresponding to your stack trace.

This works as long as the buffer containing your stack trace exists within the project directory tracked by lsp-mode, because lsp-metals-analyze-stacktrace needs the LSP workspace to find the location of your errors.

Note that if you try to do this from sbt-mode, you may get an error unless you patch lsp-find-workspace with the following:

The above should become unnecessary once this issue is resolved.

Reference ​

Manually trigger build import ​

To manually trigger a build import, run M-x lsp-metals-build-import .

Import build command

Run doctor ​

Run M-x lsp-metals-doctor-run to troubleshoot potential configuration problems in your build.

Run doctor command

If you want an alternative to lsp-mode, the LSP client eglot might be worth trying out.

To configure Eglot with Metals:
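The original configuration is not reproduced here; a minimal sketch, assuming the eglot package and the metals-emacs launcher binary described below, might be:

```elisp
;; Sketch of an Eglot setup for Metals; `metals-emacs' is the launcher
;; binary built in the next step.
(require 'eglot)
(add-to-list 'eglot-server-programs
             '(scala-mode . ("metals-emacs")))
;; start Eglot automatically in Scala buffers
(add-hook 'scala-mode-hook 'eglot-ensure)
```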

If you start Emacs now, it will fail, since the metals-emacs binary does not exist yet.

(optional) It's recommended to enable JVM string de-duplication and provide a generous stack size and memory options.

The -Dmetals.client=emacs flag is important since it configures Metals for usage with Emacs.
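The launcher can be built with coursier along the following lines (the Metals version shown is a placeholder; pick a current release):

```
# Sketch: build a `metals-emacs' launcher with the JVM options
# described above. The version number is a placeholder.
coursier bootstrap \
  --java-opt -XX:+UseStringDeduplication \
  --java-opt -Xss4m \
  --java-opt -Xms100m \
  --java-opt -Dmetals.client=emacs \
  org.scalameta:metals_2.12:0.11.11 \
  -o metals-emacs -f
```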

Files and Directories to include in your Gitignore ​

The Metals server places logs and other files in the .metals directory, and the Bloop compile server places logs and compilation artifacts in the .bloop directory. The Bloop plugin that generates the Bloop configuration is added in a metals.sbt file at project/metals.sbt, and in further project directories depending on how deeply nested the *.sbt files are: to support each *.sbt file, Metals needs to create an additional file at ./project/project/metals.sbt relative to that sbt file. Working with Ammonite scripts places compiled scripts into the .ammonite directory. It's recommended to exclude these directories and files from version control systems like git.
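Based on the directories listed above, the corresponding .gitignore entries would be:

```
.metals/
.bloop/
.ammonite/
metals.sbt
```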

Worksheets ​

Worksheets are a great way to explore an API, try out an idea, or code up an example and quickly see the evaluated expression or result. Behind the scenes, worksheets are powered by the great work done in mdoc.

Getting started with Worksheets ​

To get started with a worksheet, you can either use the metals.new-scala-file command and select Worksheet, or create a file called *.worksheet.sc. This naming is important, since it tells Metals that the file is meant to be treated as a worksheet and not just a Scala script. Where you create the file also matters: if you'd like to use classes and values from your project, make sure the worksheet is created inside your src directory. You can still create a worksheet elsewhere, but then you will only have access to the standard library and your dependencies.

Evaluations ​

After saving, you'll see the result of each expression as a comment at the end of the line. The full result may be truncated if it's too long, so you can also hover over the comment to expand it.

Keep in mind that you don't need to wrap your code in an object . In worksheets everything can be evaluated at the top level.

Using dependencies in worksheets ​

You are able to include an external dependency in your worksheet by including it in one of the following two ways.

:: is the same as %% in sbt, which will append the current Scala binary version to the artifact name.

You can also import scalac options in a special $scalac import like below:
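A sketch of both imports in a worksheet file (the dependency coordinates and compiler flag below are examples, not taken from the original):

```scala
// example.worksheet.sc
import $dep.`com.lihaoyi::upickle:3.1.0`  // example dependency
import $scalac.`-Ywarn-unused`            // example Scala 2 compiler flag

val xs = List(1, 2, 3)                    // result shown inline after saving
val json = upickle.default.write(xs)
```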

Running scalafix rules ​

Scalafix allows you to specify refactoring and linting rules that can be applied to your codebase. Please check out the scalafix website for more information.

Since Metals v0.11.7 it's possible to run Scalafix rules using a special command, metals.scalafix-run. This runs all the rules defined in your .scalafix.conf file. All built-in rules and the community hygiene ones can be run without any additional settings. For all other rules, however, you need to add an additional dependency in the metals.scalafixRulesDependencies user setting. Those rules need to be specified as strings such as com.github.liancheng::organize-imports:0.6.0, which follows the same convention as coursier dependencies.

A sample scalafix configuration can be seen below:
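The original sample is not reproduced here; a minimal .scalafix.conf sketch, using the OrganizeImports rule whose dependency is mentioned above, might be:

```
// .scalafix.conf
rules = [
  OrganizeImports
]
OrganizeImports.groupedImports = Merge
```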

February 7, 2021

Configuring Emacs for Rust Development


Rust support in Emacs improved a lot during the past two years. This post will walk you through setting up Emacs to allow for:

This setup will be based on rust-analyzer , a LSP server that is under very active development and powers the Rust support in VS Code as well.

This post is accompanied by a github repository that you can use as a reference or directly checkout and run Emacs with ( see below ). I’ve tested the configuration with Emacs 28.2, rust stable 1.66.0 and on macOS 13.1, Ubuntu 22.04 and Windows 10.

For a setup that uses the emacs-racer backend 1 please see David Crook’s guide .

Table of Contents

- Rust-analyzer, lsp-mode and lsp-ui-mode
- Code navigation
- Code actions
- Code completion and snippets
- Inline errors
- Inline type hints
- Rust playground
- Additional packages

Changes made to this guide:

If you already have Rust and Emacs installed (see prerequisites), you can get up and running quickly without modifying any of your existing configuration. The rksm/emacs-rust-config GitHub repo contains a standalone.el file that you can use to start Emacs with:
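The invocation presumably looks like the following; check the repository's readme for the authoritative command:

```
git clone https://github.com/rksm/emacs-rust-config.git
cd emacs-rust-config
emacs -q --load ./standalone.el
```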

This will start Emacs with its .emacs.d directory set to the location of the checked-out repository (and a different elpa directory). This means it will not use or modify your $HOME/.emacs.d. If you are unsure whether you'd be happy with what is described here, this is an easy way to find out.

All dependencies will be installed on first startup, which means the first start will take a few seconds.

On Windows you can use a shortcut to start Emacs with those parameters. If you are on macOS and have installed the Emacs.app, you will need to start Emacs from the command line with:
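Assuming Emacs.app is in the standard /Applications location:

```
/Applications/Emacs.app/Contents/MacOS/Emacs -q --load ./standalone.el
```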

Prerequisites

Before we get to the actual Emacs configuration, please make sure your system is set up with the following.

Install the Rust toolchain with cargo; rustup makes that easy. Install stable Rust and make sure that the .cargo/bin directory is in your PATH (rustup does this by default). rust-analyzer also needs the Rust source code, which you can install with rustup component add rust-src .

You need the rust-analyzer server binary. You can install it following the rust-analyzer manual; pre-compiled binaries are available. However, since rust-analyzer is developed so actively, I usually just clone the GitHub repository and build the binary myself. This makes upgrading (and downgrading, should it be necessary) very straightforward.
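Building from source looks roughly like this; `cargo xtask install --server` is the build command documented in the rust-analyzer manual:

```
git clone https://github.com/rust-lang/rust-analyzer.git
cd rust-analyzer
cargo xtask install --server   # builds and installs the server binary
```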

I’ve heard that (very) occasionally the most recent version might not work. In that case, I recommend taking a look at the rust-analyzer changelog, which contains links to a git commit for each week's update. If you run into trouble, build from an earlier version, which will likely succeed.

Please also make sure that your Emacs packages are up to date, in particular lsp-mode and rustic-mode, to ensure that the newest rust-analyzer features are supported.

If you need to run an older version of rust-analyzer, you can look for an older release tag with git tag and then build against it like:
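For example, inside the rust-analyzer checkout (the tag name below is a placeholder; pick one from the output of git tag):

```
git checkout tags/2022-01-01   # placeholder tag
cargo xtask install --server
```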

This should work with older versions of the relevant emacs packages as well.

I have tested the setup with Emacs 28.2. On macOS I normally use emacsformacosx.com. On Windows I use the "nearby GNU mirror" link at gnu.org/software/emacs. On Ubuntu, adding another apt repository is necessary. Note that the config will likely work with older Emacs versions, but Emacs 27 brought substantial improvements around JSON parsing, which speeds up LSP clients quite a bit.

Note that I use use-package for Emacs package management. It will be auto-installed in the standalone version of this config. Otherwise you can add a snippet like below to your init.el :
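A common bootstrap snippet for use-package looks like this:

```elisp
;; Register MELPA and make sure use-package itself is installed.
(require 'package)
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)
(unless (package-installed-p 'use-package)
  (package-refresh-contents)
  (package-install 'use-package))
(require 'use-package)
```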

Rust Emacs Configuration in Detail

The essential modes being used are the following:
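The configuration below is a sketch of how these modes fit together, not a verbatim copy of the article's config; the keybindings and options shown are illustrative:

```elisp
;; Core Rust setup: rustic on top of lsp-mode and lsp-ui.
(use-package rustic
  :bind (:map rustic-mode-map
              ("M-j" . lsp-ui-imenu)
              ("C-c C-c a" . lsp-execute-code-action)
              ("C-c C-c r" . lsp-rename))
  :config
  ;; format-on-save via rustfmt is enabled by default; to disable:
  ;; (setq rustic-format-on-save nil)
  )

(use-package lsp-mode
  :commands lsp)

(use-package lsp-ui
  :commands lsp-ui-mode
  :config (add-hook 'lsp-mode-hook 'lsp-ui-mode)
  :custom
  ;; don't show inline documentation overlays
  (lsp-ui-doc-enable nil))
```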

rustic is an extension of rust-mode that adds a number of useful features (see its GitHub readme). It is the core of the setup, and you can use it on its own without any other Emacs packages (and without rust-analyzer) if you just want code highlighting, compilation and cargo commands bound to Emacs shortcuts, and a few other features.

Most of rustic's features are bound to the C-c C-c prefix (that is, press Control-c twice and then another key).


You can use C-c C-c C-r to run the program via cargo run . You will be asked for parameters and can for example specify --release to run in release mode or --bin other-bin to run the target named “other-bin” (instead of main.rs). To pass parameters to the executable itself use -- --arg1 --arg2 .

The shortcut C-c C-c C-c will run the test at point. Very handy for running inline tests without always having to switch back and forth between a terminal and Emacs.

C-c C-p opens a popup buffer that will give you similar access to the commands shown above but will stick around.

Rustic provides even more helpful integration with cargo, e.g. M-x rustic-cargo-add will allow you to add dependencies to your project's Cargo.toml (via cargo-edit, which is installed on demand).

If you would like to share a code snippet with others, M-x rustic-playpen will open your current buffer in https://play.rust-lang.org where you can run the Rust code online and get a shareable link.

Code formatting on save is enabled and will use rustfmt. To disable it set (setq rustic-format-on-save nil) . You can still format a buffer on demand using C-c C-c C-o .

lsp-mode provides the integration with rust-analyzer. It enables the IDE features such as navigating through source code, highlighting errors via flycheck (see below) and provides the auto-completion source for company (also below).

lsp-ui is optional. It provides inline overlays over the symbol at point and enables code fixes at point. If you find it too flashy and prefer not to activate it, just remove :config (add-hook 'lsp-mode-hook 'lsp-ui-mode) .

The config shown above already disables the documentation normally shown inline by lsp-ui, which is too much for my taste, as it often covers up source code. If you also want to deactivate the documentation shown in the minibuffer, you can add (setq lsp-eldoc-hook nil) . To do less when your cursor moves, consider (setq lsp-signature-auto-activate nil) and (setq lsp-enable-symbol-highlighting nil) .

lsp-mode will try to figure out the project directory that rust-analyzer should use to index the project. When you first open a file inside a new project, you will be asked which directory to import:


The first selection (press i) will use the directory in which a Cargo.toml file was found, potentially the workspace root if the crate you work on is inside a workspace. The second selection (I) will allow you to select the root project directory manually. The last selection (n) will prevent lsp from starting. Sometimes when you follow a reference to a library crate, it can be useful not to enable lsp for it, as the lsp startup and indexing will take some time for large code bases.

lsp-mode remembers the choice of project directory, so the next time you open a file of a known project you do not need to make that selection again. If you ever want to change this, you can invoke the lsp-workspace-folders-remove command to interactively remove directories from the list of known projects.

Should you ever see an error such as LSP :: rust-analyzer failed to discover workspace when trying to open a .rs file, try invoking the lsp-workspace-folders-add manually and add the root project directory.

Having set up lsp-mode, you can use M-. to jump to the definition of functions, structs, packages, etc. when your cursor is over a symbol, and M-, to jump back. With M-? you can list all references to a symbol.


With M-j you can open up an outline of the current module that allows you to quickly navigate between functions and other definitions.


Refactorings are possible using M-x lsp-rename and lsp-execute-code-action . Code actions are basically code transformations and fixes. For example, the linter may find a way to express code more idiomatically:
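A hypothetical example of such a transformation (the function names are invented for illustration): clippy's needless_range_loop lint flags manual index loops, and the suggested fix rewrites them as an iterator chain.

```rust
// Before: a manual index loop that the linter would flag.
fn sum_of_squares_manual(xs: &[i32]) -> i32 {
    let mut total = 0;
    for i in 0..xs.len() {
        total += xs[i] * xs[i];
    }
    total
}

// After: the more idiomatic form the code action produces.
fn sum_of_squares(xs: &[i32]) -> i32 {
    xs.iter().map(|x| x * x).sum()
}

fn main() {
    let xs = [1, 2, 3];
    assert_eq!(sum_of_squares_manual(&xs), sum_of_squares(&xs));
    println!("sum of squares: {}", sum_of_squares(&xs)); // prints 14
}
```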

The number of available code actions continuously grows. A full list is available in the rust-analyzer documentation. Favorites include automatically importing functions or fully qualifying symbols. E.g. in a module that does not yet use HashMap, type HashMap and then select the option to import std::collections::HashMap . Other code actions allow you to add all possible arms in a match expression or convert a #[derive(Trait)] into the code needed for a custom implementation. And many, many more.

If you develop macros, quickly seeing how they expand can be really useful. Use M-x lsp-rust-analyzer-expand-macro or the shortcut C-c C-c e to macroexpand.

lsp-mode directly integrates with company-mode , a completion framework for Emacs. It displays a list of possible symbols that could be inserted at the cursor. It is very helpful when working with unfamiliar libraries (or the std lib) and reduces the need to look up documentation. Rust's type system is used as a source for the completions, so what you can insert (mostly) makes sense.

By default, the code completion popup appears after company-idle-delay , which is 0.5 seconds. You can modify that value or disable the auto popup completely by setting company-begin-commands to nil .

This also enables code snippets via yasnippet . I have added the list of my most commonly used snippets to the GitHub repository; feel free to copy and modify them. They work by typing a certain character sequence and then pressing TAB. For example, for<TAB> will expand into a for loop. You can customize what is pre-filled, the number of stops while expanding, and even run custom elisp code. See the yasnippet documentation.

To enable snippet expansion, code completion, and indentation when you press the TAB key, we need to customize the command that runs when TAB is pressed:
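A sketch of such a setup; company-indent-or-complete-common covers indentation and completion, and yasnippet hooks into TAB on its own:

```elisp
;; Let TAB indent or complete depending on context.
(use-package company
  :bind
  (:map company-active-map
        ("C-n" . company-select-next)
        ("C-p" . company-select-previous))
  (:map lsp-mode-map
        ("<tab>" . company-indent-or-complete-common)))

;; Snippet expansion, also triggered via TAB.
(use-package yasnippet
  :config (yas-global-mode))
```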

My most commonly used snippets are for , log , ifl , match and fn .

That one is easy; rustic does the heavy lifting. We just need to make sure flycheck is loaded:
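A minimal sketch:

```elisp
;; rustic wires errors into flycheck; this just ensures the package
;; is installed and loaded.
(use-package flycheck :ensure t)
```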

You can display a list of errors and warnings using M-x flycheck-list-errors or by pressing C-c C-c l .

Rust-analyzer and lsp-mode are able to show inline type annotations. Normally those would only appear via eldoc when placing the cursor over the defined variable; with the annotations, you always see the inferred types. Use (setq lsp-rust-analyzer-server-display-inlay-hints t) to enable them. To actually insert an inferred type into the source code, move your cursor over the defined variable and run M-x lsp-execute-code-action or C-c C-c a .

Note that they might not interact well with lsp-ui-sideline-mode . If you prefer the hints but want to disable sideline mode, you can add (setq lsp-ui-sideline-enable nil) to a rustic-mode-hook .

As of the rust-analyzer and lsp-mode versions of 2022-03-24 there are even more kinds of inline hints available which now include lifetime hints, intermediate types in method chains and more!

Emacs integrates with gdb and lldb via the dap-mode package 2 . To set up debugging support for Rust, you will need some additional setup and build steps. In particular, you will need lldb-mi , which is not part of the official llvm distribution that Apple provides via Xcode.

I only tested building lldb-mi on macOS. Here is how I got it working:
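The original build steps are not reproduced here; a sketch, assuming the lldb-tools/lldb-mi repository (consult its readme for the authoritative steps):

```
git clone https://github.com/lldb-tools/lldb-mi.git
cd lldb-mi
cmake .
cmake --build .
# copy the resulting lldb-mi binary somewhere on your PATH
```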

In order to have Emacs find that executable, you will need to make sure exec-path is set up correctly at startup. The full dap-mode config looks like this:
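The following is a sketch rather than the article's verbatim config; exec-path-from-shell is an assumption used here to populate exec-path from your shell's PATH:

```elisp
;; Make sure lldb-mi on your shell PATH is visible to Emacs.
(use-package exec-path-from-shell
  :init (exec-path-from-shell-initialize))

(use-package dap-mode
  :config
  (dap-ui-mode)
  (dap-ui-controls-mode 1)
  (require 'dap-lldb)
  (require 'dap-gdb-lldb)
  ;; installs the .extension/vscode/webfreak.debug adapter
  (dap-gdb-lldb-setup)
  (dap-register-debug-template
   "Rust::LLDB Run Configuration"
   (list :type "lldb"
         :request "launch"
         :name "LLDB::Run"
         :gdbpath "rust-lldb"
         :target nil
         :cwd nil)))
```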

(dap-gdb-lldb-setup) will install a VS Code extension into user-emacs-dir/.extension/vscode/webfreak.debug . One problem I observed is that this installation is not always successful. Should you end up without a webfreak.debug directory, you might need to delete the vscode/ folder and run (dap-gdb-lldb-setup) again.

I also needed to run sudo DevToolsSecurity --enable once to allow the debugger access to processes.

Additionally, I ran into another issue: when starting the debug target I would see an error, even though lldb-mi was on my path and I could start it from within Emacs. It turns out that the error does not come from lldb-mi but from the path to the target you start. When you start debugging with M-x dap-debug or via dap-hydra d d , after you select Rust::LLDB Run Configuration , make sure that the path to the target executable you want to debug is not a relative path and does not contain ~ . If it's an absolute path, it should work.

An example that would fail with the above error is a target path starting with an unexpanded ~/ .

I needed to specify the full path /Users/robert/projects/rust/emacs/test-project/target/debug/test-project .

Once it is working it should look like that:

In that example I first activate dap-hydra with C-c C-c d . I then select a Rust debug target (that I built using cargo beforehand) with d d . Before that, I had already set a breakpoint with b p . I then step through and into the code with n and i . Note that you can also use the mouse to set breakpoints and step.

Setting up debugging is still not as smooth as it could be but once it is running it is a joy!

You have probably seen the online Rust playground https://play.rust-lang.org/ that quickly allows you to run and share snippets of Rust code. A somewhat similar project for Emacs is grafov/rust-playground , which allows you to quickly create (and remove) Rust scratch projects. By default, the rust-playground command will create Rust project directories at ~/.emacs.d/rust-playground/ and open up main.rs with keybindings to quickly run the project ( C-c C-c ). This is very handy if you want to quickly test a Rust code snippet or try out a library. All from the comfort of your own editor!

I will not cover them here, but there are a number of other Emacs packages that will vastly improve the Emacs development experience for Rust and other languages. Just some pointers:

Thanks to all the package maintainers!

Last but not least a big Thank You! to all the people developing and maintaining the open source software referenced here. The rust-analyzer project is amazing and has improved the state of Rust Emacs tooling considerably. That of course would not be half as useful without lsp-mode and lsp-ui. rustic simplifies a lot of the otherwise necessary configuration around rust-mode and adds very helpful features. Company and flycheck are my defaults for other language modes anyway. And of course also thanks to all the Emacs maintainers and everyone I forgot who had a hand in all this!

Racer used to be the best option for getting IDE features (code navigation etc.) into Emacs. It is a non-LSP solution that is still faster than RLS and rust-analyzer. However, its feature set, especially around code completion, is no longer up to par with rust-analyzer.  ↩︎

Emacs has also built-in support for gdb via GUD but needs to control the gdb process directly. DAP is more similar to LSP in that it is used to control a debugging process remotely and makes it easier for editors to integrate with.  ↩︎

© Robert Krahn 2009-2022
