Write code with natural speech

The open-source voice assistant for developers.

With Serenade, you can write code using natural speech. Serenade's speech-to-code engine is designed for developers from the ground up and is fully open-source.

Take a break from typing

Give your hands a break without missing a beat. Whether you have an injury or you're looking to prevent one, Serenade can help you be just as productive without typing at all.

Secure, fast speech-to-code

Serenade can run in the cloud, to minimize impact on your system's resources, or completely locally, so all of your voice commands and source code stay on-device. It's up to you, and everything is open-source.

Add voice to any application

Serenade integrates with your existing tools—from writing code with VS Code to messaging with Slack—so you don't have to learn an entirely new workflow. And, Serenade provides you with the right speech engine to match what you're editing, whether that's code or prose.

Code more flexibly

Don't get stuck at your keyboard all day. Break up your workflow by using natural voice commands without worrying about syntax, formatting, and symbols.

Customize your workflow

Create powerful custom voice commands and plugins using Serenade's open protocol, and add them to your workflow. Or, try customizations shared by the Serenade community.

Start coding with voice today

Ready to supercharge your workflow with voice? Download Serenade for free and start using speech alongside typing, or leave your keyboard behind.

DEV Community

b4rtaz

Posted on May 26, 2021

Programming by Voice for Visual Studio Code

Hi guys! I've just released my new extension for Visual Studio Code: Voice Assistant. It lets you insert code snippets into your code by voice. You can prepare different snippets for each project, it supports multiple VS Code windows, and you can use it with any programming language!

(check this example with voice 🔉)

🚀 How to Run?

  • Install this extension from the marketplace.
  • Install & run a server. The server is necessary because it does all of the speech recognition work. Currently only Windows is supported. 💾 Download the server for Windows (it requires .NET 5).
  • Add a voice-assistant.json file to the root directory of your project and click "Reload definition". You can use this example file or click the "Add example voice-assistant.json" button.
  • That's it! 🎤

Please give me your feedback. ✌

Write code without the keyboard

Difficulty typing? Use your voice to code without spelling things out by talking with GitHub Copilot.

Copilot Voice was formerly known as "Hey, GitHub!".

Illustration: a scatterplot showing the distribution of age versus ticket price for the Titanic, an example of output produced with the program.

  • go to line 34
  • go to method X
  • go to next block

Type less, code more

Write and edit code, navigate the codebase, and control Visual Studio Code with your voice.

SpeechRecognition 3.10.1

pip install SpeechRecognition

Released: Dec 6, 2023

Library for performing speech recognition, with support for several engines and APIs, online and offline.

License: BSD License (BSD)

Author: Anthony Zhang (Uberi)

Tags: speech, recognition, voice, sphinx, google, wit, bing, api, houndify, ibm, snowboy

Requires: Python >=3.8

Classifiers

  • Development Status :: 5 - Production/Stable
  • License :: OSI Approved :: BSD License
  • Operating System :: MacOS :: MacOS X
  • Operating System :: Microsoft :: Windows
  • Operating System :: POSIX :: Linux
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.8
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10
  • Programming Language :: Python :: 3.11
  • Topic :: Multimedia :: Sound/Audio :: Speech
  • Topic :: Software Development :: Libraries :: Python Modules

Project description

UPDATE 2022-02-09: Hey everyone! This project started as a tech demo, but these days it needs more time than I have to keep up with all the PRs and issues. Therefore, I'd like to put out an open invite for collaborators - just reach out at me@anthonyz.ca if you're interested!

Speech recognition engine/API support:

  • CMU Sphinx (works offline)
  • Google Speech Recognition
  • Google Cloud Speech API
  • Wit.ai
  • Microsoft Bing Voice Recognition
  • Houndify API
  • IBM Speech to Text
  • Snowboy Hotword Detection (works offline)
  • Tensorflow
  • Vosk API (works offline)
  • OpenAI whisper (works offline)
  • Whisper API

Quickstart: pip install SpeechRecognition. See the "Installing" section for more details.

To quickly try it out, run python -m speech_recognition after installing.
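
As a minimal sketch of typical microphone usage (assuming PyAudio is installed, and using the free Google recognizer, which needs an internet connection):

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.Microphone() as source:
        r.adjust_for_ambient_noise(source)  # calibrate the energy threshold to background noise
        print("Say something!")
        audio = r.listen(source)

    try:
        print("You said: " + r.recognize_google(audio))
    except sr.UnknownValueError:
        print("Could not understand the audio")
    except sr.RequestError as e:
        print("Recognition request failed; {0}".format(e))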

Library Reference

The library reference documents every publicly accessible object in the library. This document is also included under reference/library-reference.rst.

See Notes on using PocketSphinx for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under reference/pocketsphinx.rst.

To use Vosk, you have to install Vosk models. Models are available for download from the Vosk website; place them in a models folder of your project, like "your-project-folder/models/your-vosk-model".

See the examples/ directory in the repository root for usage examples.

First, make sure you have all the requirements listed in the "Requirements" section.

The easiest way to install this is using pip install SpeechRecognition.

Otherwise, download the source distribution from PyPI, and extract the archive.

In the folder, run python setup.py install.

Requirements

To use all of the functionality of the library, you should have all of the requirements described below.

Apart from Python itself, the requirements are optional, but can improve or extend functionality in some situations.

The following sections go over the details of each requirement.

The first software requirement is Python 3.8+. This is required to use the library.

PyAudio (for microphone users)

PyAudio is required if and only if you want to use microphone input (Microphone). PyAudio version 0.2.11+ is required, as earlier versions have known memory management bugs when recording from microphones in certain situations.

If not installed, everything in the library will still work, except attempting to instantiate a Microphone object will raise an AttributeError.

The installation instructions on the PyAudio website are quite good - for convenience, they are summarized below:

PyAudio wheel packages for common 64-bit Python versions on Windows and Linux are included for convenience, under the third-party/ directory in the repository root. To install, simply run pip install wheel followed by pip install ./third-party/WHEEL_FILENAME (replace pip with pip3 if using Python 3) in the repository root directory.

PocketSphinx-Python (for Sphinx users)

PocketSphinx-Python is required if and only if you want to use the Sphinx recognizer (recognizer_instance.recognize_sphinx).

PocketSphinx-Python wheel packages for 64-bit Python 3.4 and 3.5 on Windows are included for convenience, under the third-party/ directory. To install, simply run pip install wheel followed by pip install ./third-party/WHEEL_FILENAME (replace pip with pip3 if using Python 3) in the SpeechRecognition folder.

On Linux and other POSIX systems (such as OS X), follow the instructions under "Building PocketSphinx-Python from source" in Notes on using PocketSphinx for installation instructions.

Note that the versions available in most package repositories are outdated and will not work with the bundled language data. Using the bundled wheel packages or building from source is recommended.
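
For instance, once PocketSphinx-Python is installed, offline transcription of an audio file might look like this (the file name "test.wav" is a placeholder):

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("test.wav") as source:  # any supported WAV/AIFF/FLAC file
        audio = r.record(source)              # read the entire file into an AudioData

    print(r.recognize_sphinx(audio))          # decode entirely offline with PocketSphinx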

Vosk (for Vosk users)

Vosk API is required if and only if you want to use the Vosk recognizer (recognizer_instance.recognize_vosk).

You can install it with python3 -m pip install vosk.

You also have to install Vosk models: models are available for download from the Vosk website. Place them in a models folder of your project, like "your-project-folder/models/your-vosk-model".
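
Usage then mirrors the other recognizers; a minimal sketch, assuming the vosk package is installed and a model is in place ("test.wav" is a placeholder file name):

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("test.wav") as source:
        audio = r.record(source)

    print(r.recognize_vosk(audio))  # offline decoding with the downloaded Vosk model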

Google Cloud Speech Library for Python (for Google Cloud Speech API users)

Google Cloud Speech library for Python is required if and only if you want to use the Google Cloud Speech API (recognizer_instance.recognize_google_cloud).

If not installed, everything in the library will still work, except calling recognizer_instance.recognize_google_cloud will raise a RequestError.

According to the official installation instructions, the recommended way to install this is using Pip: execute pip install google-cloud-speech (replace pip with pip3 if using Python 3).
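
A minimal sketch, assuming your Google Cloud credentials are already configured (for example via the GOOGLE_APPLICATION_CREDENTIALS environment variable) and "test.wav" is a placeholder file name:

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("test.wav") as source:
        audio = r.record(source)

    # sends the audio to the Google Cloud Speech API using your configured credentials
    print(r.recognize_google_cloud(audio))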

FLAC (for some systems)

A FLAC encoder is required to encode the audio data to send to the API. If using Windows (x86 or x86-64), OS X (Intel Macs only, OS X 10.6 or higher), or Linux (x86 or x86-64), this is already bundled with this library - you do not need to install anything.

Otherwise, ensure that you have the flac command line tool, which is often available through the system package manager. For example, this would usually be sudo apt-get install flac on Debian-derivatives, or brew install flac on OS X with Homebrew.

Whisper (for Whisper users)

Whisper is required if and only if you want to use whisper (recognizer_instance.recognize_whisper).

You can install it with python3 -m pip install git+https://github.com/openai/whisper.git soundfile.
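
Once installed, Whisper runs locally through the same interface; a minimal sketch (the model choice and file name are illustrative):

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("test.wav") as source:
        audio = r.record(source)

    # transcribes locally; the default "base" Whisper model is downloaded on first use
    print(r.recognize_whisper(audio, language="english"))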

Whisper API (for Whisper API users)

The library openai is required if and only if you want to use the Whisper API (recognizer_instance.recognize_whisper_api).

If not installed, everything in the library will still work, except calling recognizer_instance.recognize_whisper_api will raise a RequestError.

You can install it with python3 -m pip install openai.
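
A minimal sketch, assuming a valid OpenAI API key is available in the OPENAI_API_KEY environment variable:

    import os
    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("test.wav") as source:  # "test.wav" is a placeholder file name
        audio = r.record(source)

    # sends the audio to OpenAI's hosted Whisper API
    print(r.recognize_whisper_api(audio, api_key=os.environ["OPENAI_API_KEY"]))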

Troubleshooting

The recognizer tries to recognize speech even when I'm not speaking, or after I'm done speaking.

Try increasing the recognizer_instance.energy_threshold property. This value determines how sensitive the recognizer is to when recognition should start: higher values mean it will be less sensitive, which is useful if you are in a loud room.

This value depends entirely on your microphone or audio data. There is no one-size-fits-all value, but good values typically range from 50 to 4000.

Also, check on your microphone volume settings. If it is too sensitive, the microphone may be picking up a lot of ambient noise. If it is too insensitive, the microphone may be rejecting speech as just noise.
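
For example, to set the threshold explicitly (1000 is an arbitrary starting point; tune it for your room and microphone):

    import speech_recognition as sr

    r = sr.Recognizer()
    r.energy_threshold = 1000           # higher = less sensitive to quiet sounds
    r.dynamic_energy_threshold = False  # optionally stop automatic re-adjustment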

The recognizer can’t recognize speech right after it starts listening for the first time.

The recognizer_instance.energy_threshold property is probably set to a value that is too high to start off with, and then being adjusted lower automatically by dynamic energy threshold adjustment. Before it is at a good level, the energy threshold is so high that speech is just considered ambient noise.

The solution is to decrease this threshold, or call recognizer_instance.adjust_for_ambient_noise beforehand, which will set the threshold to a good value automatically.

The recognizer doesn’t understand my particular language/dialect.

Try setting the recognition language to your language/dialect. To do this, see the documentation for recognizer_instance.recognize_sphinx, recognizer_instance.recognize_google, recognizer_instance.recognize_wit, recognizer_instance.recognize_bing, recognizer_instance.recognize_api, recognizer_instance.recognize_houndify, and recognizer_instance.recognize_ibm.

For example, if your language/dialect is British English, it is better to use "en-GB" as the language rather than "en-US".
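
For instance, with the free Google recognizer ("test.wav" is a placeholder file name):

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("test.wav") as source:
        audio = r.record(source)

    # ask for British English instead of the default US English
    print(r.recognize_google(audio, language="en-GB"))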

The recognizer hangs on recognizer_instance.listen; specifically, when it's calling Microphone.MicrophoneStream.read.

This usually happens when you're using a Raspberry Pi board, which doesn't have audio input capabilities by itself. This causes the default microphone used by PyAudio to simply block when we try to read it. If you happen to be using a Raspberry Pi, you'll need a USB sound card (or USB microphone).

Once you do this, change all instances of Microphone() to Microphone(device_index=MICROPHONE_INDEX), where MICROPHONE_INDEX is the hardware-specific index of the microphone.

To figure out what the value of MICROPHONE_INDEX should be, run the following code:
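
    import speech_recognition as sr

    for index, name in enumerate(sr.Microphone.list_microphone_names()):
        print("Microphone with name \"{1}\" found for `Microphone(device_index={0})`".format(index, name))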

This will print out something like the following:
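
    Microphone with name "HDA Intel HDMI: 0 (hw:0,3)" found for `Microphone(device_index=0)`
    Microphone with name "HDA Intel HDMI: 1 (hw:0,7)" found for `Microphone(device_index=1)`
    Microphone with name "HDA Intel HDMI: 2 (hw:0,8)" found for `Microphone(device_index=2)`
    Microphone with name "Blue Snowball: USB Audio (hw:1,0)" found for `Microphone(device_index=3)`

(The exact names and indices depend on your system; here the Blue Snowball microphone is at device index 3.)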

Now, to use the Snowball microphone, you would change Microphone() to Microphone(device_index=3) .

Calling Microphone() gives the error IOError: No Default Input Device Available.

As the error says, the program doesn't know which microphone to use.

To proceed, either use Microphone(device_index=MICROPHONE_INDEX, ...) instead of Microphone(...), or set a default microphone in your OS. You can obtain possible values of MICROPHONE_INDEX using the code in the troubleshooting entry right above this one.

The program doesn’t run when compiled with PyInstaller .

As of PyInstaller version 3.0, SpeechRecognition is supported out of the box. If you’re getting weird issues when compiling your program using PyInstaller, simply update PyInstaller.

You can easily do this by running pip install --upgrade pyinstaller .

On Ubuntu/Debian, I get annoying output in the terminal saying things like “bt_audio_service_open: […] Connection refused” and various others.

The “bt_audio_service_open” error means that you have a Bluetooth audio device, but as a physical device is not currently connected, we can’t actually use it - if you’re not using a Bluetooth microphone, then this can be safely ignored. If you are, and audio isn’t working, then double check to make sure your microphone is actually connected. There does not seem to be a simple way to disable these messages.

For errors of the form "ALSA lib […] Unknown PCM", see this StackOverflow answer. Basically, to get rid of an error of the form "Unknown PCM cards.pcm.rear", simply comment out pcm.rear cards.pcm.rear in /usr/share/alsa/alsa.conf, ~/.asoundrc, and /etc/asound.conf.

For "jack server is not running or cannot be started" or "connect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)" or "attempt to connect to server failed", these are caused by ALSA trying to connect to JACK, and can be safely ignored. I'm not aware of any simple way to turn those messages off at this time, besides entirely disabling printing while starting the microphone.

On OS X, I get a ChildProcessError saying that it couldn’t find the system FLAC converter, even though it’s installed.

Installing FLAC for OS X directly from the source code will not work, since it doesn’t correctly add the executables to the search path.

Installing FLAC using Homebrew ensures that the search path is correctly updated. First, ensure you have Homebrew, then run brew install flac to install the necessary files.

To hack on this library, first make sure you have all the requirements listed in the “Requirements” section.

To install/reinstall the library locally, run python setup.py install in the project root directory.

Before a release, the version number is bumped in README.rst and speech_recognition/__init__.py . Version tags are then created using git config gpg.program gpg2 && git config user.signingkey DB45F6C431DE7C2DCD99FF7904882258A4063489 && git tag -s VERSION_GOES_HERE -m "Version VERSION_GOES_HERE" .

Releases are done by running make-release.sh VERSION_GOES_HERE to build the Python source packages, sign them, and upload them to PyPI.

To run all the tests:
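
    python -m unittest discover --verbose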

Testing is also done automatically by TravisCI, upon every push. To set up the environment for offline/local Travis-like testing on a Debian-like system:

FLAC Executables

The included flac-win32 executable is the official FLAC 1.3.2 32-bit Windows binary.

The included flac-linux-x86 and flac-linux-x86_64 executables are built from the FLAC 1.3.2 source code with Manylinux to ensure that they're compatible with a wide variety of distributions.

The built FLAC executables should be bit-for-bit reproducible. To rebuild them, run the following inside the project directory on a Debian-like system:

The included flac-mac executable is extracted from xACT 2.39, which is a frontend for FLAC 1.3.2 that conveniently includes binaries for all of its encoders. Specifically, it is a copy of xACT 2.39/xACT.app/Contents/Resources/flac in xACT2.39.zip.

Please report bugs and suggestions at the issue tracker!

How to cite this library (APA style):

Zhang, A. (2017). Speech Recognition (Version 3.8) [Software]. Available from https://github.com/Uberi/speech_recognition#readme.

How to cite this library (Chicago style):

Zhang, Anthony. 2017. Speech Recognition (version 3.8).

Also check out the Python Baidu Yuyin API, which is based on an older version of this project, and adds support for Baidu Yuyin. Note that Baidu Yuyin is only available inside China.

Copyright 2014-2017 Anthony Zhang (Uberi). The source code for this library is available online at GitHub.

SpeechRecognition is made available under the 3-clause BSD license. See LICENSE.txt in the project's root directory for more information.

For convenience, all the official distributions of SpeechRecognition already include a copy of the necessary copyright notices and licenses. In your project, you can simply say that licensing information for SpeechRecognition can be found within the SpeechRecognition README, and make sure SpeechRecognition is visible to users if they wish to see it.

SpeechRecognition distributes source code, binaries, and language files from CMU Sphinx. These files are BSD-licensed and redistributable as long as copyright notices are correctly retained. See speech_recognition/pocketsphinx-data/*/LICENSE*.txt and third-party/LICENSE-Sphinx.txt for license details for individual parts.

SpeechRecognition distributes source code and binaries from PyAudio. These files are MIT-licensed and redistributable as long as copyright notices are correctly retained. See third-party/LICENSE-PyAudio.txt for license details.

SpeechRecognition distributes binaries from FLAC - speech_recognition/flac-win32.exe, speech_recognition/flac-linux-x86, and speech_recognition/flac-mac. These files are GPLv2-licensed and redistributable, as long as the terms of the GPL are satisfied. The FLAC binaries are an aggregate of separate programs, so these GPL restrictions do not apply to the library or your programs that use the library, only to FLAC itself. See LICENSE-FLAC.txt for license details.

COMMENTS

  1. VS Code Speech

    Speech extension for Visual Studio Code. The Speech extension for Visual Studio Code adds speech-to-text capabilities to Visual Studio Code. No internet connection is required, the voice audio data is processed locally on your computer. For example, you can use this extension anywhere VS Code offers chat capabilities such as with GitHub Copilot ...

  2. VS Code Speech · microsoft/vscode Wiki · GitHub

    The VS Code Speech extension adds speech-to-text capabilities to the chat interfaces in Visual Studio Code. No internet connection is required, the voice audio data is ...

  3. GitHub Copilot Voice

    GitHub Copilot Voice provides voice-first UX to control VScode and make GitHub copilot accessible to even more developers. The simplest way of getting started is to say, Copilot followed by a command. ... In active mode, developers do not have to say, "Copilot" to activate the speech-to-text service. The extension continuously listens and ...

  4. New in VS Code: Voice Dictation, Improved Copilot AI

    By David Ramel. 02/29/2024. Microsoft continued to enhance speech functionality in its Visual Studio Code editor with a new accessibility tool that lets developers dictate directly into the editor. The voice dictation feature in the February 2024 update, bringing the tool to version 1.87, follows the ability to kick off a Copilot Chat session ...

  5. olefjaerestad/vscode-speech-to-text

    cmd/ctrl+shift+p to open the command palette. Run the Speech to Text: Dictate command. This will start a web server at localhost:9000 and a WebSocket server at localhost:9001. localhost:9000 will automatically open in your default browser. If it doesn't, open it manually in Chrome or another browser that supports the Web Speech API. This will connect to the WebSocket server and also ask for ...

  6. Serenade

    Add voice to any application. Serenade integrates with your existing tools—from writing code with VS Code to messaging with Slack—so you don't have to learn an entirely new workflow. And, Serenade provides you with the right speech engine to match what you're editing, whether that's code or prose. Python VS Code JavaScript Chrome Markdown ...

  7. pedrooaugusto/speech-to-code: Speech to Code

    Webapp, Server and Client: are responsible for the application UI, capture audio and transform audio into text. Spoken: is responsible for testing if a given phrase is a valid voice command and to extract important information out of it (parse). Spoken VSCode Extension: is a Visual Studio Code extension able to receive commands to manipulate ...

  8. Programming by Voice for Visual Studio Code

    The server is necessary, because it does all speech recognition job. Currently we support only Windows. 💾 Download server for Windows (it requires .NET5) Add voice-assistant.json file to root directory of your project and click "Reload definition". You may use this example file or click "Add example voice-assistant.json" button. That's it! 🎤

  9. Speech to text

    The Audio API provides two speech to text endpoints, transcriptions and translations, based on our state-of-the-art open source large-v2 Whisper model. They can be used to: transcribe audio into whatever language the audio is in, and translate and transcribe the audio into English.

  10. Voice Assistant for VSCode

    The server is necessary, because it does all speech recognition job. Currently we support only Windows. 💾 Download .NET Server (only for Windows, it requires .NET5) Add voice-assistant.json file to root directory of your project and click "Reload definition". You may use an example file or click "Add example voice-assistant.json" button.

  11. How do I turn on text wrapping by default in VS Code

    First, go to Settings, search for "word wrap" in the search box, then turn on the Word Wrap option. Go to File > Preferences > Settings OR use the shortcut Ctrl+,. Now type the keyword "word wrap" and set it ON/OFF as per your preference. Press Ctrl+, on the keyboard. Click on the Editor: Word Wrap drop-down. Select "on".

  12. GitHub Next

    Write/edit code. Just state your intent in natural language and let Copilot Voice do the heavy lifting of suggesting a code snippet. And if you don't like what was generated, ask for a change in plain English. Go to the next method.

  13. SpeechRecognition · PyPI

    IBM Speech to Text; Snowboy Hotword Detection (works offline) Tensorflow; Vosk API (works offline) OpenAI whisper (works offline) Whisper API; Quickstart: pip install SpeechRecognition. See the "Installing" section for more details. To quickly try it out, run python -m speech_recognition after installing. Project links:

  14. GitHub

    Name of the voice used to read back text. If null, this defaults to the current system voice. For a list of available voices, on a Mac either: Open System Preferences. Go to Dictation & Speech. Open the Text to Speech tab. Browse the system voice selections. or, in a command line run:

  15. vscode-speech-to-text README

    cmd/ctrl+shift+p to open the command palette. Run the Speech to Text: Dictate command. This will start a web server at localhost:9000 and a WebSocket server at localhost:9001. localhost:9000 will automatically open in your default browser. If it doesn't, open it manually in Chrome or another browser that supports the Web Speech API. This will connect to the WebSocket server and ...

  16. English (United Kingdom) language support for VS Code Speech

    English (United Kingdom) language support for speech-to-text and other voice capabilities in VS Code. Installation: launch VS Code Quick Open (Ctrl+P), paste the extension's install command, and press Enter.

  17. Speech to Text Online Made Easy

    The speech-to-text converter built into the CapCut online video editor is free to use and accessible from within the program. Google's speech-to-text API, Otter.ai, and Temi are just a few of the many free online speech-to-text converters. However, in terms of comprehensive performance, CapCut will be the first choice to meet all your editing ...

  18. I have installed Vs Code speech from microsoft in vscode, and ...

    2024-02-10 15:06:10.782 [trace] [vscode-speech-1] stopped speech-to-text session 2024-02-10 15:06:10.787 [trace] [vscode-speech-1] disposing speech-to-text session ... Can you stop VS Code, set the environment variable VSCODE_SPEECH_LOGS_PATH pointing to a folder that exists and then try again and share the logs.