Alexa-ChatGPT

This repository contains the Alexa skill to use the OpenAI API


Logic

  • The Alexa skill lambda sends the request, via SQS, to a second lambda that processes the ChatGPT prompts. The Alexa skill lambda then polls the response SQS queue and returns the response if a message is available within ~7 seconds.

Due to the Alexa skill response constraint of 8 seconds, the following logic has been applied:

  • If the Alexa skill doesn't receive an SQS message when polling the response SQS within ~7 seconds, it will return 'your response will be available momentarily' so as not to end the Alexa skill session.
  • When querying the Alexa skill with 'Last Response', the Alexa skill lambda will immediately poll the response SQS to retrieve the delayed response and output it along with the timestamp of the response time.

Infrastructure

Examples

SETUP

How to configure your Alexa Skill

Environment

We use the HANDLER environment variable to name the Go binary either 'main' or 'bootstrap' (for the AL2 'provided' runtime); devs should use 'main':

  export HANDLER=main

Prerequisites

AWS CLI Configuration

Make sure you configure the AWS CLI

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region 'us-east-1'

  aws configure

Requirements

  • OPENAI API KEY

    • Please set an environment variable for your OpenAI API key:

      export API_KEY=123456
  • Create an S3 bucket on your AWS account

    • Set an environment variable with the name of the S3 bucket you have created (this is where AWS SAM uploads build artifacts):

      export S3_BUCKET_NAME=bucket_name

Deployment Steps

  1. Create a new Alexa skill with a name of your choice

  2. Set the Alexa skill invocation phrase, e.g. 'My question'

  3. Set the built-in intent invocations to their relevant phrases, e.g. 'help', 'stop', 'cancel', etc.

  4. Create a new Intent named 'AutoCompleteIntent'

  5. Add a new Alexa slot to this intent, name it 'prompt', and give it the type 'AMAZON.SearchQuery'

  6. Add an invocation phrase for the 'AutoCompleteIntent' with the value 'question {prompt}'

  7. Deploy the stack to your AWS account.

    sam build && sam deploy --stack-name chat-gpt --s3-bucket $S3_BUCKET_NAME --parameter-overrides "ApiKey=$API_KEY" --capabilities CAPABILITY_IAM

  8. Once the stack has deployed, make a note of the lambda ARN from the 'ChatGPTLambdaArn' field in the output of

    sam list stack-outputs --stack-name chat-gpt

  9. Apply this lambda ARN to the 'Default Endpoint' configuration within your Alexa skill, e.g. 'arn:aws:lambda:us-east-1:123456789:function:chatGPT'

  10. Begin testing your Alexa skill by querying with 'My question' (or your chosen invocation phrase); Alexa should respond with "Hi, let's begin our conversation!"

  11. Query Alexa 'question {your sentence here}'

    Note: the OpenAI API may take longer than 8 seconds to respond. In this scenario Alexa will tell you your answer will be ready momentarily; simply ask Alexa for the 'last response' afterwards.

  12. Tell Alexa to 'stop'

  13. Testing complete!
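For reference, the intent and slot created in steps 4–6 correspond to a fragment like the following in the skill's JSON Editor. This is a sketch only: the invocation name is an example, and the built-in intents from step 3 are omitted.

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my question",
      "intents": [
        {
          "name": "AutoCompleteIntent",
          "slots": [
            {
              "name": "prompt",
              "type": "AMAZON.SearchQuery"
            }
          ],
          "samples": [
            "question {prompt}"
          ]
        }
      ]
    }
  }
}
```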

Contributors

This project exists thanks to all the people who contribute.

Donations

All donations are appreciated!

