Warning
You are looking at a development work-in-progress branch. If you want to share feedback or report issues, click here to open the related issue.
Also, this documentation is not complete yet, so please report any issues or feedback you have.
Unofficial TypeScript Node.js client for the Character.AI API, an awesome website that brings characters to life with AI!
Table of contents:
- Intro
- Features
- Installation
- Using an Access Token
- Finding your character's ID
- Finding your conversation's ID
- Talking to characters
- Getting voice messages
- Calling characters
- Manipulating images
- Personas
- Character management
- Group chats
- Troubleshooting
- Support
- Disclaimer
node_characterai is an unopinionated TypeScript client and wrapper for Character.AI. It allows you to use the website and app through a developer-friendly, programmable interface.
Preface:
This package is a completely reworked and revamped version of the 1.x series, which used endpoints that are now considered old and was actively maintained from 2023 to 2024, until the Character.AI team completely deprecated those endpoints and the old website on September 10th, 2024.
The old version still partly works to this day, but it is at risk of full deprecation: it relies on features that are no longer available, does not take advantage of everything the new endpoints and website offer, and used tools like Puppeteer to get around some restrictions.
This new version has been rewritten to be as developer-friendly as possible and with type safety in mind, a long-requested feature.
Old intro: (1.x)
This repository is inspired by RichardDorian's unofficial node API. However, I found it hard to use, it was not very stable, and it has since been archived, so I remade it in JavaScript.
This project is not affiliated with Character AI in any way! It is a community project. Its purpose is to let you build projects powered by Character AI.
If you like this project, please check their website.
- Fully written in TypeScript with type safety in mind
- Fully asynchronous requests
- Easy and developer-friendly to use
- DM characters, fetch information, create, edit, and delete conversations
- Huge list of features you can use and interact with, like on the app or website
- Call characters and use the TTS and STT features
- Create and edit voices or characters with a few lines of code
- Switch message candidates, submit annotations, and manipulate messages
- Built-in image manipulation
- Group chat support (soon)
- Active development
```shell
npm install node_characterai@beta
```

Note: This is temporary until the stable version is out.
Tip
Type-safety bindings are bundled with the package, so you do not need to worry about installing @types.
TypeScript (recommended):

```typescript
import { CharacterAI } from "node_characterai";
```

JavaScript (CommonJS):

```javascript
const { CharacterAI } = require("node_characterai");
```

Warning
Anyone with your session token could have access to your account without your consent. Do this at your own risk and responsibility.
You have two ways of getting your access token: via network inspection, or via local storage.
To get it from local storage:
- Open the Character.AI website in your browser on the front page.
- Open any conversation with any character. (Important)
- Open the developer tools (F12, Ctrl+Shift+I, or Cmd+J).
- Go to the `Application` tab.
- Go to the `Storage` section and click on `Local Storage`.
- Look for the `HTTP_AUTHORIZATION` key.
- Open the object, right-click on the value, and copy your access token.
Tip
Sometimes the HTTP_AUTHORIZATION key doesn't show up directly. You will need to be in a conversation with a chatbot first. Try refreshing the page until you see it.
Authentication refers to logging in to an account to use the client. Back in beta, Character.AI had a guest login feature, which was deprecated in favor of using accounts.
Basic authentication usage:
```typescript
const characterAI = new CharacterAI();
characterAI.authenticate("INSERT ACCESS TOKEN RIGHT HERE").then(async () => {
    console.log("Logged in");
    // start coding in here!
});
```

Tip
Please avoid putting your access token directly in your code. You are unintentionally giving access to your account if you share code with your access token in it, and if you ever publish it to somewhere public like GitHub, the token will still be found in the commit history, and there is a good chance GitHub will index it too.
Instead, use something like process.env and .env files. Click here to see a comprehensive tutorial and documentation.
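As a sketch of that approach, here is a minimal, hypothetical helper that reads the token from an environment variable (the variable name `CHARACTER_AI_TOKEN` is an assumption, not something the package mandates):

```typescript
// Hypothetical helper: read the access token from the environment instead of
// hardcoding it. CHARACTER_AI_TOKEN is an assumed variable name.
function getAccessToken(): string {
    const token = process.env.CHARACTER_AI_TOKEN;
    if (!token) {
        throw new Error("CHARACTER_AI_TOKEN is not set; put it in your environment or a .env file");
    }
    return token;
}

// usage with the client would then look like:
// await characterAI.authenticate(getAccessToken());
```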
You can find your character ID in the URL of a conversation.
You can also fetch the conversations with code when you fetch your character:

```typescript
// gets the latest DMs with a character.
// you can also customize the amount of preview messages to fetch.
const dms = await character.getDMs();
```

Tip
When you open a conversation using DM(), if you do not specify a chatId, the latest conversation with the character will be fetched, or a new DM will be created.
Or, if you want to fetch a previous conversation you had on your browser or phone, open the conversation history (History > Click on the Conversation) and look at the URL.
What comes after hist= is the external/conversation ID.
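If you'd rather extract that ID programmatically, here is a small sketch using Node's built-in URL class (the URL below is a made-up example; only the `hist` query parameter matters):

```typescript
// Hypothetical helper: pull the `hist` query parameter
// (the external/conversation ID) out of a conversation history URL.
function extractConversationId(historyUrl: string): string | null {
    return new URL(historyUrl).searchParams.get("hist");
}

// example with a made-up URL
console.log(extractConversationId("https://character.ai/chat?hist=abc123")); // "abc123"
```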

// this part is not finished //
Talking to characters is an integral part of character.ai.
Chatting with node_characterai is pretty straightforward and is inspired by how you would actually do it on the app or the website.
```typescript
// get your character
const character = await characterAI.fetchCharacter(characterId);

// dm it
// use `await character.DM(chatId);` instead if you have a specific conversation in mind.
const dm = await character.DM();

// send it a message
const message = await dm.sendMessage("test");

// get the text content
const content = message.content;
console.log(content);
```

Candidates represent the sub-messages created by a reply from your character. They allow you to regenerate, pick, choose, and review which candidate of a message you like best. The final candidate is called the primary candidate.
Message -> Candidate
Tip
In order to avoid confusion, by default, message.content returns the primary candidate's content.
If you are not satisfied with the message and you want to regenerate it (similar to how you do it on the website), you can use message.regenerate().
```typescript
// creates and generates a new candidate
await message.regenerate();
```

To manage the candidates in a message, here's how you can do it:
```typescript
// gets all the currently stored candidates
// since they are cached locally, if you need to refetch them
// to be up to date, use `conversation.refreshMessages()`
message.getCandidates();

// this gets the current and primary candidate
message.primaryCandidate;

// this switches to the next available candidate (does not generate a new one!)
await message.switchToNextCandidate();

// this switches to the previously available candidate (does not generate a new one!)
await message.switchToPreviousCandidate();

// this switches the primary candidate to a specific candidate id
await message.switchPrimaryCandidateTo(candidateId);
```

Now, from there, you can choose to evaluate the candidate (the 4-star option on the app) or get the TTS file.
```typescript
// example candidate
const candidate = message.primaryCandidate;

// rate the candidate 4 stars
await candidate.createAnnotation(AnnotationStars.Four);

// remove the stars
await candidate.createAnnotation(AnnotationStars.Remove);

// evaluate the candidate (example value)
await candidate.createAnnotation(AnnotationValue.Boring);
```

Voice messages, or Text-To-Speech (TTS) audio (a.k.a. the character talking), can be fetched. On the website and app, you can fetch the voice by clicking here:
The following methods give you a URL to an MP3 file that you can use however you want.
Note: You will have to download the file manually if needed; you are only supplied with a URL.
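If you do need the audio on disk, here is a minimal sketch of downloading it with Node's built-in fetch (the `ttsUrl` argument stands for a URL returned by the methods below; this helper is not part of the package):

```typescript
import { writeFile } from "node:fs/promises";

// Hypothetical helper: download the MP3 behind a TTS URL and save it locally.
async function downloadTTS(ttsUrl: string, outPath: string): Promise<void> {
    const response = await fetch(ttsUrl);
    if (!response.ok) throw new Error(`download failed with status ${response.status}`);
    await writeFile(outPath, Buffer.from(await response.arrayBuffer()));
}

// usage would then look like:
// await downloadTTS(await message.getTTSUrl(voiceId), "./message.mp3");
```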
To do the following with the package, you do:
```typescript
// with a specific voice id
await message.getTTSUrl(voiceId);

// or with a query/character name/voice name
await message.getTTSUrlWithQuery("voice name");
```

Warning
This feature is currently broken due to some dependency issues. I am working to get them solved, please do not use it yet. Click here to get more details.
Calling characters is a pretty awesome feature that Character.AI incorporates. Luckily, node_characterai also supports calling characters, with flexibility in mind.
Voices are an essential part of the calling feature. By default, if you have set a default voice on a character via the app or website, the package will try to use it automatically. But when calling, you can choose to specify a specific query or voice.
Here is some basic code to do that:
```typescript
// fetch your voices
const myVoices = await characterAI.fetchMyVoices();

// or search for one
const potentialVoices = await characterAI.searchCharacterVoices("Character");

// or even get the system ones
const systemVoices = await characterAI.fetchSystemVoices();

// or fetch a specific one with its id
const specificVoice = await characterAI.fetchVoice(voiceId);

// or, set a voice override for a specific character
await character.setVoiceOverride(voice.id);
```

Once you have settled on the voice you wish to use, scroll down to learn how to initiate a call and set the voice id, or search for a specific query.
We have seen how to fetch and look for voices, but the package also allows you to edit and create voices, like you would on the app or website.
Creating a voice using an .mp3 file:

```typescript
import fs from "fs";

const fileContent = fs.readFileSync('./voice.mp3');
const blob = new File([fileContent], 'input.mp3', { type: 'audio/mpeg' });
const voice = await characterAI.myProfile.createVoice("voice name", "description", true, "this is a preview text", blob, VoiceGender.Neutral);
```

Editing voices:
```typescript
// edit the voice (see options)
await voice.edit();

// delete the voice
await voice.delete();
```

Using the audio interface (the AudioInterface class), you can get microphone/input device and speaker/output device information for your platform, and change the sox path.
Basic usage:
```typescript
// get all audio devices
const allDevices = AudioInterface.getAllDevices();

// get microphones or speakers
const microphones = AudioInterface.getMicrophones();
const speakers = AudioInterface.getSpeakers();

// find microphones by their ID or name
const microphoneById = AudioInterface.getMicrophoneFromId(2);
const microphoneByName = AudioInterface.getMicrophoneFromName('USB Microphone');

// same for speakers
const speakerById = AudioInterface.getSpeakerFromId(1);
const speakerByName = AudioInterface.getSpeakerFromName('Bluetooth Speakers');

// and optionally, if you need, you can set a custom sox path.
// it is `null` by default.
AudioInterface.soxPath = '/custom/path/to/sox';
```

Most of the audio features are handled natively using naudiodon, a Node.js implementation of PortAudio. I highly recommend you check out their package. Some functions, like playFile(), use Sound eXchange (sox for short) to handle playback, which should work on most platforms.
Here are the instructions for installing sox depending on your platform:

**Windows**

1. Download SoX:
   - Visit the official SoX SourceForge page: http://sox.sourceforge.net/
   - Download the latest Windows binary release
2. Installation:
   - Extract the downloaded ZIP file
   - Add the sox folder to your PATH, or place it in your project folder.

Tip
If you do not know how to add something to your PATH and you do not wish to use your project directory, this YouTube tutorial shows how to do it easily. Set the path to wherever you installed sox.

3. Verify installation:

```shell
sox --version
```

Note: 32-bit and 64-bit x86 architectures are supported. Windows ARM is not currently supported, but potential alternatives like WSL or ffmpeg work.

**macOS**

- Using Homebrew (recommended):

```shell
brew install sox
```

- Using MacPorts:

```shell
sudo port install sox
```

- Verify installation:

```shell
sox --version
```

**Linux**

- Ubuntu/Debian:

```shell
sudo apt-get update
sudo apt-get install sox libsox-fmt-all
```

- Fedora:

```shell
sudo dnf install sox sox-plugins-freeworld
```

- Arch Linux:

```shell
sudo pacman -S sox
```

- Verify installation:

```shell
sox --version
```
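Before relying on playback features like playFile(), you can check from Node that sox is actually reachable. A sketch (the helper below is illustrative, not part of node_characterai):

```typescript
import { spawnSync } from "node:child_process";

// Hypothetical helper: returns the sox version string, or null if sox
// cannot be found or fails to run.
function getSoxVersion(soxPath: string = "sox"): string | null {
    const result = spawnSync(soxPath, ["--version"], { encoding: "utf8" });
    if (result.error || result.status !== 0) return null;
    return (result.stdout || result.stderr).trim() || null;
}

// prints the version, or a hint to install sox as described above
console.log(getSoxVersion() ?? "sox not found on PATH");
```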
In the following example, we will call a character using our microphone device as input, and speakers as output.
Warning
You can only call one character on the same account at a time. Trying to start a call somewhere else while this is running will cause the call to be interrupted and disconnected.
```typescript
// create a dm if you have not already
const dm = await character.DM();

// create a call session
const call = await dm.call({
    // protip: use 'default' to get the default microphone. otherwise, you can look above to get the device id or name of your choice.
    microphoneDevice: 'default',
    speakerDevice: 'default',

    // you can use voiceId to specify a specific voice
    // voiceId: "id"

    // or use a specific query
    // voiceQuery: "voice name"
});
```

You can use voiceId to specify a specific voice, or use voiceQuery to automatically look for one. Note that you cannot use both arguments at the same time.
This alone should be enough to let you talk to the character without any extra setup.
The call allows you to subscribe to a few events in order to track what you are saying when the character starts and stops talking.
```typescript
call.on('userSpeechProgress', candidate => console.log("User speech candidate:", candidate));
call.on('userSpeechEnded', candidate => console.log("User ended speech candidate:", candidate));
call.on('characterBeganSpeaking', () => console.log("Character started speaking"));
call.on('characterEndedSpeaking', () => console.log("Character ended speaking"));
```

There are also other misc. calling features that allow you to spice up and have more control over the conversation. Mainly:
```typescript
// interrupting a character when they're speaking
if (call.canInterruptCharacter)
    await call.interruptCharacter();

// muting yourself
call.mute = true;

// knowing if the character is speaking
if (call.isCharacterSpeaking)
    console.log("The character is currently speaking");

// hanging up after you're done
await call.hangUp();
```

Warning
Providing no data or input for a prolonged amount of time will result in the character disconnecting from the call. Make sure to send empty data if required to keep the call from hanging up.
Previously, we have seen a basic, batteries-included way of calling characters with the package, but if you want more flexibility with your own voice input and output, here is how to proceed.
A pair of streams (inputStream and outputStream) included in the call class allows you to send and receive audio data.
Important
The recommended input format AND the output format is 16-bit mono PCM at 48000Hz.
The inputStream awaits any real-time audio, which will be encoded as LiveKit frames. This means YOU need to stream your own PCM data in the correct format and pipe it through the stream.
Alternatively, playFile() allows you to play a file and stream it with sox.
The outputStream gives out raw PCM data in the same format. No extra decoding is required, but if you wish to save or modify the data, you will have to use a tool like ffmpeg or sox.
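As an illustration of that format, here is how you could build a frame of silence, for example to keep the call from hanging up during quiet periods (writing it to call.inputStream is an assumption about the stream accepting raw PCM buffers):

```typescript
// 16-bit mono PCM at 48000 Hz, the format stated above
const SAMPLE_RATE = 48000;
const BYTES_PER_SAMPLE = 2; // 16-bit samples

// Hypothetical helper: a zeroed buffer of 16-bit samples is silence.
function makeSilence(durationMs: number): Buffer {
    const samples = Math.floor((SAMPLE_RATE * durationMs) / 1000);
    return Buffer.alloc(samples * BYTES_PER_SAMPLE);
}

const frame = makeSilence(20); // 20 ms => 960 samples => 1920 bytes
console.log(frame.length); // 1920

// hypothetical usage against the call's input stream:
// call.inputStream.write(frame);
```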
Since 1.x, manipulating images has become much more straightforward.
Images are now stored as instances of a class called CAIImage. They store a local copy of the image and allow you to manipulate it, generate a new image, and upload the image to Character.AI's servers.
They work on a STORED cache basis, meaning that changes are stored locally until published.
Example of basic usage, editing the avatar:

```typescript
// getting the avatar CAIImage instance
const avatar = characterAI.myProfile.avatar;

// printing the full URL to CAI's servers
console.log(avatar.fullUrl);

// changing the file to a local one for example
avatar.changeToFilePath("./avatar.png");

// or better yet, change to a prompt
await avatar.changeToPrompt("Cool cat with sunglasses");

// there are more, for example:
// changeToBlobOrFile() - changes to a File or Blob instance
// changeToUrl() - downloads an image and changes to it
// changeToBuffer() - changes to a buffer
// etc...

// use this if you want to restore the image from the servers
await avatar.reloadImage();
```

The images, as said before, are stored locally and hold a Sharp image instance, so you can edit the actual image, save it to your hard drive, and more.
```typescript
const image = await avatar.getSharpImage();

// example of modifications, making it a PNG file and saving it to the hard drive.
image.rotate(90).png().toFile('./image.png');
```

To read more about Sharp and its documentation, click here.
As said previously, the changes ARE NOT automatically published and saved. They are cached in memory until you decide to upload and publish your changes.
Uploading sends the image to the Character.AI servers, while publishing applies the image to whatever you are editing, for example an avatar.
This can be useful if you mess up the image, for example, and wish to have full control of the pipeline.
```typescript
// UPLOAD changes first
await avatar.uploadChanges();

// APPLY the changes to the profile.
await characterAI.myProfile.edit({ editAvatar: true });
```

The edit() publishing logic applies to other classes where images are involved, like the Character class.
Personas are a unique and cool way to change how you interact with characters. Character AI provides a way on the website and app to create and edit your personas and set the default one. The package allows you to do that too.
Creating or fetching a persona:
```typescript
// creating one
const persona = await characterAI.myProfile.createPersona("persona name", "definition", true);

// or fetching the ones we made
const myPersonas = await characterAI.myProfile.fetchPersonas();

// or fetching one if you know the id
const fetchedPersona = await characterAI.myProfile.fetchPersona(personaId);

// or getting the default one
const defaultPersona = await characterAI.myProfile.getDefaultPersona();

// or getting the persona override of a character
const characterPersona = await character.getPersonaOverride();
```

Persona management:
```typescript
// makes the persona the default one
await persona.makeDefault();

// editing the persona (see options)
await persona.edit();

// removing the persona
await persona.remove();

// setting a character's persona (via instance or id)
await character.setPersonaOverride(persona);
await character.setPersonaOverride(personaId);
```

Like personas or voices, characters can be managed too.
To find characters on Character.AI, you can search for them using keywords or fetch characters that are featured or recommended for your account.
```typescript
// search for characters by keyword
const searchResults = await characterAI.searchCharacter("keyword");

// fetch featured characters
const featuredCharacters = await characterAI.getFeaturedCharacters();

// fetch characters recommended for you
const recommendedCharacters = await characterAI.getRecommendedCharactersForYou();

// fetch characters similar to another character
const similarCharacters = await characterAI.getSimilarCharactersTo(characterId);
```

To explore characters created by a specific user, use their username to fetch their profile and retrieve their characters.
```typescript
// fetch a user's profile by username
const profile = await characterAI.fetchProfileByUsername("username");

// get the characters created by this user
const userCharacters = profile.characters;
```

Once authenticated, you can create, edit, delete, and manage your own characters easily.
```typescript
// fetch your own characters
const myCharacters = characterAI.myProfile.characters;

// fetch hidden characters (not publicly visible)
const hiddenCharacters = characterAI.myProfile.hiddenCharacters;

// fetch liked characters
const likedCharacters = await characterAI.getLikedCharacters();
```

You can create characters with detailed customizations, edit their properties, or delete them when no longer needed.
Creating a character:
```typescript
// create a character
const character = await characterAI.myProfile.createCharacter();

// edit the character info with new information (see options)
await character.edit();

// or delete it
await character.delete();
```

Warning
You need to own the character to perform these actions, and deletion is irreversible.
You can personalize interactions by assigning a specific persona or voice to a character.
```typescript
// set a persona or a voice override
await character.setPersonaOverride("personaId");
await character.setVoiceOverride("voiceId");
```

Group chats are a feature that is currently on hold while I work on it. Please come back later!
| Problem | Solution |
|---|---|
| `ModuleNotFoundError: No module named 'distutils'` | Run `pip install setuptools` in a terminal. |
| Other problems? | Please submit an issue if you see a problem that's not listed here. |
If you wish to support the package or the project, you have a few options:
| Option | Description | Link |
|---|---|---|
| You wish to support me financially | Thank you! You can send me a coffee on Ko-fi. | Send me a coffee |
| You have an idea or feedback | I'm glad to hear it! Feel free to open an issue or reach me privately. | Open a new issue |
| You want to report a bug or a problem, or you have a question | Feel free to open an issue or reach me privately. | Open a new issue |
| You want to contribute with code | Feel free to open a Pull Request! | View PRs |
| You want to share a word or a creation | Feel free to contact me anywhere! I'm looking forward to seeing what you can create with the package. | Send me an e-mail |
| You want to support Character.AI | Feel free to visit their awesome website, or subscribe to their c.ai+ subscription. | Website |
If none of these options suit you but you still wish to help, leaving a star on this package or sharing it can greatly help!
This project has been able to stay maintained because of the incredible work of this community, and I am grateful to everyone who has supported, used, promoted, and contributed to the package.
This project is updated frequently; always check for the latest version for new features or bug fixes.
If you have an issue or idea, let me know in the Issues section.
If you use this package, you are also bound to the terms of use of their website.
Want to support me? You can send me a coffee on Ko-fi: https://ko-fi.com/coloride.





