🔸 Combo: AWS WebApp, API Explorer, GPT Exploration

Code Explanation Engine: Integrating OpenAI with Serverless AWS Architecture and React

Objective

This project’s primary objective is to create an AI-powered web application, the “Code Explanation Engine”. The application will allow users to input any piece of code, which will be interpreted and explained by leveraging the OpenAI GPT API. The key focus is not on developing the underlying machine learning capabilities but on effectively integrating this powerful API.

The project provides learners with an in-depth experience of working with third-party APIs, specifically the OpenAI GPT API, while also offering exposure to scripting for API integration. Additionally, it affords a hands-on learning experience in deploying a serverless architecture on Amazon Web Services (AWS), crucial for hosting web applications in a modern digital environment.

Another focus of the project is to develop a user-friendly front-end interface using React, emphasizing creating intuitive user experiences. This project serves as a practical exercise in advanced web application design and deployment, API utilization, cloud hosting solutions, and the integration of artificial intelligence in a user-facing application.


Learning Outcomes

  • Understand how to set up and configure AWS services, including API Gateway and Lambda functions.
  • Learn how to integrate a third-party API, such as the GPT API, into an application.
  • Gain knowledge of serverless architecture and its advantages for scalable applications.
  • Develop skills in handling API requests and responses, including data formatting and error handling.
  • Enhance front-end development skills for creating an intuitive user interface.



💡 Should you find the steps somewhat daunting, we recommend mirroring the steps presented in the mentor’s demo (recording below) prior to trying your hand at the code independently.

💡 Try each step with the help of the resources listed. If you are stuck, take a look at the example code snippets.



General Understanding and Conceptualization

  1. AWS Documentation: The official documentation for AWS is a great resource to understand the various services and how to use them.
  2. OpenAI Documentation: This documentation explains how you can use the GPT-3 API.
  3. Serverless Architecture: This article by Martin Fowler provides a deep dive into the concept of serverless architecture.

Step 1: Set up AWS Account and Services

Create Free Tier Account

To set up your AWS Account and services, first, you need to create an AWS account. Go to the AWS website and follow the registration process. After this, log into your account and navigate to the AWS Management Console.

Create API

Search for “API Gateway” in the list of services, click on it, and then create a new REST API.

REST, or Representational State Transfer, is an architectural style used for web development. APIs that adhere to the principles of REST are referred to as RESTful APIs and can be interacted with using standard HTTP methods, such as GET, POST, PUT, DELETE, etc.

The REST API will act as an interface between your frontend application and the backend Lambda function. You’ll be able to define multiple endpoints and methods on this API, which will be used to trigger specific actions in your Lambda function. We will create a POST method in a later step and integrate it with our Lambda function.

As you get more comfortable with AWS, you might explore other types of APIs, such as WebSocket APIs for real-time two-way communication, or HTTP APIs as a simpler, cheaper, and more performance-optimized version of REST APIs, though they lack some of the customization options available with REST APIs. But for now, a REST API is the perfect choice for this project.

Create AWS Lambda

Next, you’ll need to set up an AWS Lambda function. Go back to the services list and search for “Lambda”. Click on the “Create Function” button, select the “Author from scratch” option, and fill in details like function name and runtime. You can choose Node.js or Python for the runtime depending on your preference.

Install and Configure AWS CLI

To streamline your interactions with AWS, you may set up the AWS Command Line Interface (AWS CLI). This will allow you to manage your AWS services from the command line, making it easier to script and automate tasks. After setting it up, remember to configure it with your AWS credentials.

Setting Up AWS Account and Services:

  1. Creating an AWS Account
  2. API Gateway Getting Started
  3. AWS Lambda Getting Started
  4. Installing the AWS CLI
  5. Configuring the AWS CLI

Step 2: Implement Serverless Backend

Setup AWS Lambda

Set up the structure for your AWS Lambda function. This function will process incoming requests from the front-end interface. The initial code sets up the function, parses the user’s input from the request and adds error handling.
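
For orientation, here is a rough sketch of the event object your function receives when API Gateway invokes it for a POST request, assuming Lambda proxy integration. Only the fields relevant to this project are shown, and the path and input values are placeholders:

{
    "httpMethod": "POST",
    "path": "/explain",
    "headers": { "Content-Type": "application/json" },
    "body": "{\"userInput\": \"console.log('hello')\"}"
}

Note that "body" arrives as a JSON string, which is why the handler needs to parse it before reading the user’s input.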

When creating an AWS Lambda function, it’s important to assign it a role with the appropriate permissions so that it can access the resources it needs. In the case of this project, the Lambda function primarily needs to make HTTP requests to the OpenAI API, which doesn’t require any special AWS permissions. However, it’s good practice to set this up correctly from the start. Guidance on selecting a role with appropriate permissions for the Lambda function can be found in the Appendix.

In the Configuration panel, select the Environment variables section and click on Edit. Add a new environment variable by providing a Key (e.g., “OPENAI_API_KEY”) and a Value (your OpenAI API key), then save your changes.

Once this is done, you can access the stored API key in your code using the environment variable name you defined. Example code in a Node.js environment:
let openai_api_key = process.env.OPENAI_API_KEY;

Implementing Serverless Backend:

  1. Building Lambda Functions with Node.js
  2. Building Lambda Functions with Python
  3. Using API Gateway with AWS Lambda
Example Code

JavaScript (Node.js)

exports.handler = async (event, context) => {
    try {
        // Parse the JSON request body and extract the user's code input
        const body = JSON.parse(event.body);
        const userInput = body.userInput;

        // TODO: Call GPT API and return response

    } catch (error) {
        console.error('Error processing request:', error);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: 'Error processing request' })
        };
    }
};

Python

import json

def lambda_handler(event, context):
    try:
        # Parse the JSON request body and extract the user's code input
        body = json.loads(event['body'])
        user_input = body['userInput']

        # TODO: Call GPT API and return response

    except Exception as error:
        print(f'Error processing request: {error}')
        return {
            'statusCode': 500,
            'body': json.dumps({ 'message': 'Error processing request' })
        }

Both versions expect the incoming event to contain a ‘body’ key holding a JSON string. The body is parsed, and the ‘userInput’ value is extracted from the result. If anything goes wrong, both functions return a response with ‘statusCode’ and ‘body’ keys.

Remember, you will need to replace the // TODO: Call GPT API and return response comment with the actual API call and ensure you return a response in the format API Gateway expects. The success response should include ‘statusCode’ and ‘body’ keys, just like the error return statement.
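
A minimal sketch of that success path, assuming the generated explanation ends up in a hypothetical variable named explanation:

        // Success response, mirroring the structure of the error response above
        return {
            statusCode: 200,
            body: JSON.stringify({ message: explanation }) // `explanation` is a placeholder variable
        };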

Remember that these snippets serve as an initial framework, and you should expand upon them. You might want to handle different types of requests, create more complex responses, or experiment with the parameters in the GPT API call. The end goal is to gain a strong understanding of how serverless backends function and interact with other components of a web application.

Route incoming POST requests to your Lambda function

Next, you’ll configure the API Gateway to route incoming POST requests to your Lambda function. You do this by creating a new POST method on your API, linking it to your Lambda function, and deploying the API.

  • Open your API in the AWS API Gateway console.
  • Select the appropriate resource and choose “Create Method” under “Actions”.
  • Choose “POST” and link it to your Lambda function, enabling the Lambda proxy integration option so the request is passed to your function in the format the example code expects.
  • Enable CORS (Cross-Origin Resource Sharing) under “Actions” to allow your front-end application to make requests to the API.
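
One caveat: if your POST method uses Lambda proxy integration (which the example code in Step 2 assumes), API Gateway passes your function’s return value through unchanged, so the CORS headers for the POST response itself must be set by the function. A minimal sketch of a response that includes them; the Allow-Origin value is a placeholder that you would normally restrict to your frontend’s origin, and `explanation` is again a hypothetical variable:

    return {
        statusCode: 200,
        headers: {
            'Access-Control-Allow-Origin': '*',          // placeholder; restrict to your site's domain
            'Access-Control-Allow-Headers': 'Content-Type'
        },
        body: JSON.stringify({ message: explanation })
    };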

Step 3: Generate Response Using GPT API

At this point, you’ll want to call the GPT API with the user’s input and process the response. Remember that you’ll need your GPT API key to authenticate with the API.

The link provided below to OpenAI API Examples is a rich resource full of various code snippets. Although our focus is on explaining code, we encourage you not to limit your exploration. If another application catches your eye and sparks your interest, feel free to experiment with it. Just ensure that you make any required changes to your frontend code.
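
Since the goal here is an explanation of the submitted code rather than a raw completion, you will probably want to wrap the user’s input in an instruction before sending it to the API. One possible sketch, where the wording of the instruction is only an example:

    // Turn the raw code into an explanation prompt (instruction wording is illustrative)
    const prompt = `Explain what the following code does:\n\n${userInput}`;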

GPT API Integration:

  1. OpenAI API Examples: The example code can be viewed in Python, Node.js, cURL, and JSON
  2. GPT-3 Playground: This tool lets you experiment with different prompts and parameters to see how the GPT-3 API responds.
  3. Introduction to environment variables: Learn to securely manage and store your API keys and other credentials using environment variables.

Note: Before you start working with the OpenAI API in Node.js, you first need to install the package using npm or yarn. npm install openai or yarn add openai

Example Code
// Import the OpenAI client (this snippet targets v3.x of the openai package)
const { Configuration, OpenAIApi } = require('openai');

// Read the API key from the environment variable you set in the AWS console.
const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(configuration);

exports.handler = async (event, context) => {
    try {
        const body = JSON.parse(event.body);
        const userInput = body.userInput;

        // Ask the completions endpoint to generate text for the user's input
        const gptResponse = await openai.createCompletion({
            model: 'text-davinci-002',
            prompt: userInput,
            max_tokens: 60,
        });

        return {
            statusCode: 200,
            body: JSON.stringify({
                message: gptResponse.data.choices[0].text.trim(),
            }),
        };
    } catch (error) {
        console.error('Error processing request:', error);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: 'Error processing request' }),
        };
    }
};


Step 4: Build Front-End Interface

For the front-end of your application, you can use any front-end library or framework that you’re comfortable with. We will use React in the example code snippets. Check out the starter project ReactElement for a React ramp-up.

Capture User Input and Send Requests

For the front-end of your application, you will create a text input field where users can submit the code they want explained. You’ll need to add logic to the form submission handler to send an HTTP request to your API Gateway. You can use the built-in fetch function in JavaScript to do this. The request should be a POST request, and the user’s input should be included in the body of the request.

Format and Display Generated Response

Finally, you’ll add logic to process the response from your API Gateway, extract the generated explanation, and display it to the user. You’ll need to use React’s state to store the generated text and display it. Also, it’s a good idea to handle any potential errors that may occur during this process.

Building the Front-End Interface:

  1. React Documentation: The official documentation for React.js, a popular JavaScript library for building user interfaces.
  2. Bootstrap Documentation: If you want to style your interface quickly, you can use the Bootstrap library.
  3. React Testing Library: As you develop your React interface, you should also consider setting up tests. The React Testing Library is a very popular choice for this.
  4. Fetch API: The Fetch API provides a JavaScript interface for accessing and manipulating parts of the HTTP pipeline, such as requests and responses.
  5. Axios: Promise based HTTP client for the browser and Node.js. The Axios library can be used as an alternative to Fetch and is preferred by some developers.
Example Code
import React, { useState } from 'react';

const CodeExplainer = () => {
  const [input, setInput] = useState('');
  const [explanation, setExplanation] = useState('');

  const handleFormSubmit = (e) => {
    e.preventDefault();

    fetch('https://<your-api-gateway-url>', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ text: input }),
    })
      .then((response) => response.json())
      .then((data) => {
        setExplanation(data.generatedText);
      })
      .catch((error) => {
        console.error(error);
        setExplanation('An error occurred.');
      });
  };

  return (
    <div>
      <form onSubmit={handleFormSubmit}>
        <textarea
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Enter your code here"
        ></textarea>
        <button type="submit">Explain Code</button>
      </form>
      {explanation && <p>{explanation}</p>}
    </div>
  );
};

export default CodeExplainer;


As you work on developing the frontend and backend parts of a full-stack application, it’s critical that both sides adhere to a common structure when sending and receiving data. In the example code shared, there’s a mismatch between the data being sent by the frontend (React code) and the data being received by the backend (AWS Lambda function).

Task: Your task is to identify this mismatch, figure out which parts of the code need to be adjusted, and implement the necessary changes to ensure the frontend and backend can communicate correctly.

Hint

The mismatch occurs in two places:

  1. In the object sent in the body of the POST request from the frontend to the backend. The name of the property that holds the user’s input in this object does not match the name of the property that the backend expects to find in the event body.
  2. The response from the backend contains a key that the frontend code isn’t looking for when it receives the data.

Step 5: Test and Debug

At this point, you should have a working application that takes user input, sends it to the GPT API, and displays the generated text. However, it’s crucial to thoroughly test your application to ensure that everything is working as expected.

Test your application with various types of user input to make sure that the GPT API can handle them. Also, check the error handling in your Lambda function and your front-end application. Look at the logs in your Lambda function if you need to debug any issues.

Testing and Debugging:

  1. Jest: JavaScript Testing Framework
  2. React Testing Library: Useful for testing React components
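Example Code

As a starting point, here is a minimal Jest sketch that exercises the Lambda handler’s error path. The file name and require path are assumptions; adjust them to match how your project is organized:

// __tests__/handler.test.js — assumes the Lambda code lives in index.js
const { handler } = require('../index');

test('returns a 500 response when the request body is not valid JSON', async () => {
    const event = { body: 'not valid json' };
    const response = await handler(event, {});

    expect(response.statusCode).toBe(500);
    expect(JSON.parse(response.body).message).toBe('Error processing request');
});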

Step 6: Deploy the Application

Once you’ve tested your application and are happy with it, you can deploy it.

If you’re using React, you can build your application using the npm run build command. This will create a build folder in your project directory that contains all the static files you need to deploy.

You can then use a service like AWS S3 to host these static files. Once your files are uploaded, you can configure your S3 bucket for static website hosting, and your application will be live.
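
Because static website hosting serves your files over public URLs, the objects in the bucket must be publicly readable. One common way to allow this is a bucket policy along the following lines; this is a sketch, your-bucket-name is a placeholder, and the bucket’s “Block public access” settings must also permit it:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}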

After deploying, remember to monitor your application for any issues and to make updates or enhancements as necessary. Good luck with your AI-powered text generation application!

Deployment:

  1. Deploying a React App to S3: This guide explains how to deploy your React application to AWS S3.
  2. Hosting a static website on Amazon S3

Evaluation

Self-Evaluation Criteria

For your self-assessment, please consider the following points:

  • Functionality and Accuracy: Use a variety of inputs to test the application and ensure that it behaves as expected.
  • Error Handling: How well does your application handle errors? Good error handling includes providing useful feedback to the user, logging errors for debugging purposes, and gracefully handling unexpected inputs or responses.
  • Code Quality: Is your code well-structured and easy to understand? High-quality code is modular, uses clear variable and function names, includes comments where necessary, and follows standard conventions for the chosen language.
  • User Interface: Is the front-end user-friendly and intuitive? A good user interface should be easy to use, have clear instructions, and provide feedback to the user (like loading indicators or success/error messages).
  • Documentation: Have you documented your project well? This includes comments in the code, a README file explaining how to run the project, and any necessary documentation for end-users.

Resources

Resources for this project are embedded in the tasks section.


Appendix

Selecting a Role

Here’s a step-by-step guide to creating a new role for your Lambda function:

  1. From your AWS Management Console, go to the IAM (Identity and Access Management) service.
  2. In the IAM console, click on “Roles” in the left-hand menu, then click on “Create role”.
  3. Select “Lambda” as the service that will use this role, then click on “Next: Permissions”. (This choice sets the role’s trust policy; see the sketch after this list.)
  4. In the search box, type “AWSLambdaBasicExecutionRole”. This policy grants the permissions that your function needs to write logs to CloudWatch Logs, which is a common requirement for Lambda functions. Select the checkbox next to it, then click on “Next: Tags”.
  5. (Optional) Add any tags that you want, then click on “Next: Review”.
  6. Give your role a name and description, then click on “Create role”.
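
For reference, choosing Lambda as the trusted service in step 3 results in a trust policy on the role roughly like the one below; it is what allows the Lambda service to assume the role on your function’s behalf:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}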

Now, when you create your Lambda function:

  1. After choosing your function name and runtime, in the “Permissions” section, select “Use an existing role”.
  2. In the “Existing role” dropdown, select the role that you just created.

Your Lambda function now has the permissions it needs to write logs to CloudWatch Logs.

Remember that if your Lambda function needs to access other AWS resources (e.g., reading from an S3 bucket, writing to a DynamoDB table), you will need to attach additional policies to your role that grant these permissions. Always follow the principle of least privilege and only grant the permissions that are absolutely necessary.

Recording of AWS Mentor Webinar (06/21)