Towards Data Science – Medium. Your home for data science and AI. The world’s leading publication for data science, data analytics, data engineering, machine learning, and artificial intelligence professionals.

  • An Agentic Approach to Reducing LLM Hallucinations
    by Youness Mansar on December 22, 2024 at 9:31 pm

    Simple techniques to alleviate LLM hallucinations using LangGraph

Photo by Greg Rakozy on Unsplash

If you’ve worked with LLMs, you know they can sometimes hallucinate. This means they generate text that’s either nonsensical or contradicts the input data. It’s a common issue that can hurt the reliability of LLM-powered applications.

In this post, we’ll explore a few simple techniques to reduce the likelihood of hallucinations. By following these tips, you can (hopefully) improve the accuracy of your AI applications.

There are multiple types of hallucinations:

Intrinsic hallucinations: the LLM’s response contradicts the user-provided context. This is when the response is verifiably wrong within the current context.

Extrinsic hallucinations: the LLM’s response cannot be verified using the user-provided context. This is when the response may or may not be wrong, but we have no way of confirming that using the current context.

Incoherent hallucinations: the LLM’s response does not answer the question or does not make sense. This is when the LLM is unable to follow the instructions.

In this post, we will target all the types mentioned above. We will list a set of tips and tricks that work in different ways to reduce hallucinations.

Tip 1: Use Grounding

Grounding means including in-domain, relevant additional context in the input of the LLM when asking it to do a task. This gives the LLM the information it needs to answer the question correctly and reduces the likelihood of a hallucination. This is one of the reasons we use Retrieval-Augmented Generation (RAG).

For example, asking the LLM a math question, versus asking it the same question while providing it with relevant sections of a math book, will yield different results, with the second option being more likely to be right.

Here is an example of such an implementation in one of my previous tutorials, where I provide document-extracted context when asking a question: Build a Document AI pipeline for ANY type of PDF With Gemini

Tip 2: Use structured outputs

Using structured outputs means forcing the LLM to output valid JSON or YAML text. This allows you to reduce useless rambling and get “straight-to-the-point” answers about what you need from the LLM. It also helps with the next tips, as it makes the LLM responses easier to verify.

Here is how you can do this with Gemini’s API:

import json

import google.generativeai as genai
from pydantic import BaseModel, Field

from document_ai_agents.schema_utils import prepare_schema_for_gemini


class Answer(BaseModel):
    answer: str = Field(..., description="Your Answer.")


model = genai.GenerativeModel("gemini-1.5-flash-002")
answer_schema = prepare_schema_for_gemini(Answer)

question = "List all the reasons why LLM hallucinate"

context = (
    "LLM hallucination refers to the phenomenon where large language models generate plausible-sounding but"
    " factually incorrect or nonsensical information. This can occur due to various factors, including biases"
    " in the training data, the inherent limitations of the model's understanding of the real world, and the "
    "model's tendency to prioritize fluency and coherence over accuracy."
)

messages = (
    [context]
    + [f"Answer this question: {question}"]
    + [f"Use this schema for your answer: {answer_schema}"]
)

response = model.generate_content(
    messages,
    generation_config={
        "response_mime_type": "application/json",
        "response_schema": answer_schema,
        "temperature": 0.0,
    },
)

response = Answer(**json.loads(response.text))
print(f"{response.answer=}")

Here, "prepare_schema_for_gemini" is a utility function that prepares the schema to match Gemini’s weird requirements. You can find its definition here: code.

This code defines a Pydantic schema and sends it as part of the query in the field "response_schema". This forces the LLM to follow the schema in its response and makes its output easier to parse.

Tip 3: Use chain of thought and better prompting

Sometimes, giving the LLM space to work out its response before committing to a final answer can help produce better-quality responses. This technique is called chain-of-thought and is widely used because it is effective and very easy to implement.

We can also explicitly ask the LLM to answer with “N/A” if it can’t find enough context to produce a quality response. This gives it an easy way out instead of trying to answer questions it has no answer to.

For example, let’s look at this simple question and context:

Context

Thomas Jefferson (April 13 [O.S. April 2], 1743 — July 4, 1826) was an American statesman, planter, diplomat, lawyer, architect, philosopher, and Founding Father who served as the third president of the United States from 1801 to 1809.[6] He was the primary author of the Declaration of Independence. Following the American Revolutionary War and before becoming president in 1801, Jefferson was the nation’s first U.S. secretary of state under George Washington and then the nation’s second vice president under John Adams. Jefferson was a leading proponent of democracy, republicanism, and natural rights, and he produced formative documents and decisions at the state, national, and international levels. (Source: Wikipedia)

Question

What year did davis jefferson die?

A naive approach yields:

Response

answer='1826'

This is obviously false, as Jefferson Davis is not even mentioned in the context. It was Thomas Jefferson who died in 1826.

If we change the schema of the response to use chain-of-thought:

class AnswerChainOfThoughts(BaseModel):
    rationale: str = Field(
        ...,
        description="Justification of your answer.",
    )
    answer: str = Field(
        ..., description="Your Answer. Answer with 'N/A' if answer is not found"
    )

We are also adding more detail about what we expect as output when the question is not answerable using the context: “Answer with ‘N/A’ if answer is not found”.

With this new approach, we get the following rationale (remember, chain-of-thought):

The provided text discusses Thomas Jefferson, not Jefferson Davis. No information about the death of Jefferson Davis is included.

And the final answer:

answer='N/A'

Great!
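As a minimal sketch, here is how the chain-of-thought schema could be swapped into the same Gemini call shown in Tip 2 (this reuses the model, question, context, and prepare_schema_for_gemini helper defined above; it is an illustration, not the exact code from the repository):

cot_schema = prepare_schema_for_gemini(AnswerChainOfThoughts)

messages = (
    [context]
    + [f"Answer this question: {question}"]
    + [f"Use this schema for your answer: {cot_schema}"]
)

response = model.generate_content(
    messages,
    generation_config={
        "response_mime_type": "application/json",
        "response_schema": cot_schema,
        "temperature": 0.0,
    },
)

# The rationale field is generated before the answer, which is what gives the
# model room to "think" before committing to a final answer.
parsed = AnswerChainOfThoughts(**json.loads(response.text))
print(parsed.rationale)
print(parsed.answer)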
But can we use a more general approach to hallucination detection? We can, with agents!

Tip 4: Use an Agentic approach

We will build a simple agent that implements a three-step process:

The first step is to include the context and ask the question to the LLM in order to get the first candidate response and the relevant context that it used for its answer.

The second step is to reformulate the question and the first candidate response as a declarative statement.

The third step is to ask the LLM to verify whether or not the relevant context entails the candidate response. This is called “self-verification”: https://arxiv.org/pdf/2212.09561

In order to implement this, we define three nodes in LangGraph. The first node will ask the question while including the context, the second node will reformulate it using the LLM, and the third node will check the entailment of the statement in relation to the input context.

The first node can be defined as follows:

    def answer_question(self, state: DocumentQAState):
        logger.info(f"Responding to question '{state.question}'")
        assert (
            state.pages_as_base64_jpeg_images or state.pages_as_text
        ), "Input text or images"
        messages = (
            [
                {"mime_type": "image/jpeg", "data": base64_jpeg}
                for base64_jpeg in state.pages_as_base64_jpeg_images
            ]
            + state.pages_as_text
            + [
                f"Answer this question: {state.question}",
            ]
            + [
                f"Use this schema for your answer: {self.answer_cot_schema}",
            ]
        )
        response = self.model.generate_content(
            messages,
            generation_config={
                "response_mime_type": "application/json",
                "response_schema": self.answer_cot_schema,
                "temperature": 0.0,
            },
        )
        answer_cot = AnswerChainOfThoughts(**json.loads(response.text))
        return {"answer_cot": answer_cot}

And the second one as:

    def reformulate_answer(self, state: DocumentQAState):
        logger.info("Reformulating answer")
        if state.answer_cot.answer == "N/A":
            return
        messages = [
            {
                "role": "user",
                "parts": [
                    {
                        "text": "Reformulate this question and its answer as a single assertion."
                    },
                    {"text": f"Question: {state.question}"},
                    {"text": f"Answer: {state.answer_cot.answer}"},
                ]
                + [
                    {
                        "text": f"Use this schema for your answer: {self.declarative_answer_schema}"
                    }
                ],
            }
        ]
        response = self.model.generate_content(
            messages,
            generation_config={
                "response_mime_type": "application/json",
                "response_schema": self.declarative_answer_schema,
                "temperature": 0.0,
            },
        )
        answer_reformulation = AnswerReformulation(**json.loads(response.text))
        return {"answer_reformulation": answer_reformulation}

The third one as:

    def verify_answer(self, state: DocumentQAState):
        logger.info(f"Verifying answer '{state.answer_cot.answer}'")
        if state.answer_cot.answer == "N/A":
            return
        messages = [
            {
                "role": "user",
                "parts": [
                    {
                        "text": "Analyse the following context and the assertion and decide whether the context "
                        "entails the assertion or not."
                    },
                    {"text": f"Context: {state.answer_cot.relevant_context}"},
                    {
                        "text": f"Assertion: {state.answer_reformulation.declarative_answer}"
                    },
                    {
                        "text": f"Use this schema for your answer: {self.verification_cot_schema}. Be Factual."
                    },
                ],
            }
        ]
        response = self.model.generate_content(
            messages,
            generation_config={
                "response_mime_type": "application/json",
                "response_schema": self.verification_cot_schema,
                "temperature": 0.0,
            },
        )
        verification_cot = VerificationChainOfThoughts(**json.loads(response.text))
        return {"verification_cot": verification_cot}

Full code at https://github.com/CVxTz/document_ai_agents

Notice how each node uses its own schema for structured output and its own prompt. This is possible due to the flexibility of both Gemini’s API and LangGraph.
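For orientation, here is a rough sketch of how these three nodes might be wired together into a LangGraph workflow. The class name DocumentQAAgent holding the node methods is an assumption on my part; see the linked repository for the actual graph construction:

from langgraph.graph import END, StateGraph

# Hypothetical agent class holding the three node methods shown above.
agent = DocumentQAAgent()

builder = StateGraph(DocumentQAState)
builder.add_node("answer_question", agent.answer_question)
builder.add_node("reformulate_answer", agent.reformulate_answer)
builder.add_node("verify_answer", agent.verify_answer)

builder.set_entry_point("answer_question")
builder.add_edge("answer_question", "reformulate_answer")
builder.add_edge("reformulate_answer", "verify_answer")
builder.add_edge("verify_answer", END)

graph = builder.compile()

# Run the three-step workflow on a question and its context.
result = graph.invoke({"question": question, "pages_as_text": [context]})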
Let’s work through this code using the same example as above. (Note: we are not using chain-of-thought on the first prompt so that the verification gets triggered for our tests.)

Context

Thomas Jefferson (April 13 [O.S. April 2], 1743 — July 4, 1826) was an American statesman, planter, diplomat, lawyer, architect, philosopher, and Founding Father who served as the third president of the United States from 1801 to 1809.[6] He was the primary author of the Declaration of Independence. Following the American Revolutionary War and before becoming president in 1801, Jefferson was the nation’s first U.S. secretary of state under George Washington and then the nation’s second vice president under John Adams. Jefferson was a leading proponent of democracy, republicanism, and natural rights, and he produced formative documents and decisions at the state, national, and international levels. (Source: Wikipedia)

Question

What year did davis jefferson die?

First node result (First answer):

relevant_context='Thomas Jefferson (April 13 [O.S. April 2], 1743 — July 4, 1826) was an American statesman, planter, diplomat, lawyer, architect, philosopher, and Founding Father who served as the third president of the United States from 1801 to 1809.'
answer='1826'

Second node result (Answer Reformulation):

declarative_answer='Davis Jefferson died in 1826'

Third node result (Verification):

rationale='The context states that Thomas Jefferson died in 1826. The assertion states that Davis Jefferson died in 1826. The context does not mention Davis Jefferson, only Thomas Jefferson.'
entailment='No'

So the verification step rejected the initial answer (no entailment between the context and the assertion). We can now avoid returning a hallucination to the user.

Bonus Tip: Use stronger models

This tip is not always easy to apply due to budget or latency limitations, but you should know that stronger LLMs are less prone to hallucination. So, if possible, go for a more powerful LLM for your most sensitive use cases. You can check a benchmark of hallucinations here: https://github.com/vectara/hallucination-leaderboard. We can see that the top models in this benchmark (fewest hallucinations) also rank at the top of conventional NLP leaderboards. Source: https://github.com/vectara/hallucination-leaderboard Source License: Apache 2.0

Conclusion

In this tutorial, we explored strategies to improve the reliability of LLM outputs by reducing the hallucination rate. The main recommendations include careful formatting and prompting to guide LLM calls, and using a workflow-based approach where agents are designed to verify their own answers.

This involves multiple steps:

Retrieving the exact context elements used by the LLM to generate the answer.

Reformulating the answer for easier verification (in declarative form).

Instructing the LLM to check for consistency between the context and the reformulated answer.

While all these tips can significantly improve accuracy, you should remember that no method is foolproof. There’s always a risk of rejecting valid answers if the LLM is overly conservative during verification, or of missing real hallucination cases. Therefore, rigorous evaluation of your specific LLM workflows is still essential.

Full code at https://github.com/CVxTz/document_ai_agents

An Agentic Approach to Reducing LLM Hallucinations was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Creating a WhatsApp AI Agent with GPT-4o
    by Lukasz Kowejsza on December 22, 2024 at 5:32 pm

    Created with DALL-E

How to use the Meta API to build your own LLM-powered WhatsApp chatbot

A game-changer in the field of AI and business management is the integration of AI agents with widely used communication tools. Think of having a familiar chat interface with real-time data requests, updates, and task automation, all made possible by direct WhatsApp interaction with your business’s management or personal assistant AI.

In this third part of our series on creating an AI-powered business manager, I will walk you through the steps of connecting your AI agent to WhatsApp to increase its capabilities and reach. The goal is an AI assistant capable of interacting with all your relevant database tables and even creating a table and all necessary tools on its own. As a primary showcase, I focus on a business use case like tracking expenses, invoices, and so on. However, you can easily adapt the same logic to create, for example, a personal assistant that keeps track of your tasks, projects, and ideas.

This is the third part of my series. Before we start, for everyone waiting, I apologize for the long delay. I’ve been busy in the last few months starting a new AI software engineering job and adapting to the new work-life balance. I have already prepared some future parts of this series, and we will explore major changes in the agent workflow, along with more sophisticated workflows featuring several additional features. Some workarounds used in the first two articles were necessary for reliable tool calling at that time but are no longer needed due to better-performing models like GPT-4o and GPT-4o-mini. I would still recommend starting with the first two parts if you are new to tool calling and agent workflow development. I find it useful to understand how to build something from scratch before relying on frameworks like LangChain or, more specifically, LangGraph for deeply customizable agent workflows (which I will introduce in the near future).

For now, we have to step back and focus on the infrastructure first. I think in most projects, especially AI software projects, it is good practice to first create a working end-to-end product before getting lost in feature creep. I often find myself overthinking initial design choices and developing a too-complex product in my mind. To overcome this, focusing on building a working end-to-end product within a few days of development time really helps to establish a clear foundation. After that, you will know which features to prioritize and will be able to gather initial feedback. This kickstarts an incremental development process, which is always my goal when I commit to a project.

Recap of Previous Parts

We established the foundation for our AI-powered business manager in earlier installments of this series:

Part 1: The goal was to develop a prototype agent workflow that could interact with tool objects and reduce hallucinations in tool arguments produced by the underlying language model (LLM).

Part 2: We concentrated on defining basic features, organizing the project repository, and building a database schema with SQLModel. We also added SQL tools for data addition and querying and updated the Tool class to accommodate SQLModel objects.
Furthermore, we presented a TaskAgent class for automating agent startup and set up an OpenAIAgent for context-aware tool usage.

Scope for This Article

As usual, let us begin by defining the scope of this article:

Integrate the AI Agent with WhatsApp Using the Meta API
Set up and configure the Meta API for WhatsApp Business integration.
Ensure the AI agent can send and receive messages through WhatsApp.

Set Up Webhooks and Run Locally Using Ngrok
Create a FastAPI application to handle webhook events from WhatsApp.
Use ngrok to expose the local FastAPI server to the internet.
Configure the Meta API to send webhook events to the ngrok URL.

Since we are moving towards a deployable server, we also need to adjust our project architecture. We are essentially implementing a FastAPI server, and therefore my preferred choice of repository structure is Domain-Driven Design (DDD), or rather leaning towards DDD. (You can check the repo structure here.)

1. Set up WhatsApp Cloud API

First of all, you need to get familiar with the Cloud API provided by Meta. You can achieve the same results using SaaS products like Twilio, which offer a more user-friendly integration. However, due to the recent data breach and for cost-efficiency reasons, I prefer using the root API provided by Meta.

1.1 Prerequisites

Before you begin, you will need to register a Meta developer account by following these steps: How to open a Meta developer account. During this registration process, you will need to verify yourself using a phone number. Note that this will not be the phone number of your final WhatsApp client. Instead, you will get a test phone number assigned by the platform, which can later be changed to another phone number.

After registration, go to your dashboard and create an app.

Moreover, you will need a Meta Business Account (MBA) that will be associated with your real account, or you can create a new one to link to your MBA. You can also skip this, as you will be automatically prompted to link or create an MBA in the next step.

1.2 Add WhatsApp Product to Your App

After you have created an app inside your Meta developer account, you will be asked to add products to it. Here you have to choose WhatsApp and follow the setup process. If you haven’t done so, create a Meta Business Account here. Once you are done, you will have a test WhatsApp Business Account and a test phone number.

1.3 Add a Recipient Number

In the left-hand menu of the App Dashboard, navigate to WhatsApp > API Setup. There, under Send and receive messages, select the To field and choose Manage phone number list. Here, you can add a phone number that is allowed to send and receive messages from your test phone number. This should ideally be your own phone number, as you want to test your application. Before you link this WhatsApp API account to a real number, you will only be able to add up to 5 recipient numbers.

In WhatsApp > API Setup, you can now send a test message by filling in the from field with your test phone number and the to field with your recipient number (your own phone number).

Generate an access token. This is your WHATSAPP_API_KEY, which we will need later in step 6.

Screenshot ngrok (Image by Author)

We have successfully set up the Cloud API as required. In the next step, we will create a webhook that will enable communication with our AI assistant application.

To achieve this, we need to create and serve an endpoint in our backend application. This means our Python backend must be accessible through a URL.
This URL will act as the webhook endpoint that the AI assistant can use to send and receive data.

2. Creating a FastAPI Endpoint

To be accepted by the webhook, our root endpoint must handle a specific GET request that is sent by the webhook when we register our URL. The webhook will send three query parameters: hub.mode, hub.challenge, and hub.verify_token.

The verification token is defined when creating the webhook in the Cloud API. Your backend should verify that this token matches what you have defined and return the hub.challenge object as a response. Make sure to install FastAPI and Uvicorn using pip install fastapi uvicorn first.

2.1 Create main.py

Create a file named main.py with the following content:

from fastapi import FastAPI, Query, HTTPException

VERIFICATION_TOKEN = "abcdefg12345"

app = FastAPI()


@app.get("/")
def verify_whatsapp(
    hub_mode: str = Query("subscribe", description="The mode of the webhook", alias="hub.mode"),
    hub_challenge: int = Query(..., description="The challenge to verify the webhook", alias="hub.challenge"),
    hub_verify_token: str = Query(..., description="The verification token", alias="hub.verify_token"),
):
    if hub_mode == "subscribe" and hub_verify_token == VERIFICATION_TOKEN:
        return hub_challenge
    raise HTTPException(status_code=403, detail="Invalid verification token")


@app.get("/health")
def health():
    return {"status": "healthy"}


@app.get("/readiness")
def readiness():
    return {"status": "ready"}

Near the top of the file, we define a VERIFICATION_TOKEN that is used later by the webhook to verify that the backend is under your control. In this case, we have defined it as "abcdefg12345", but you can define a custom token of your own.

2.2 Run the Application

Run the application using Uvicorn:

uvicorn main:app --reload

2.3 Serve Your API Locally

Your backend now runs locally on http://localhost:8000 and/or http://127.0.0.1:8000.

We are now serving the following endpoints:

Verify WhatsApp webhook: http://127.0.0.1:8000/?hub.mode=subscribe&hub.challenge=1234&hub.verify_token=abcdefg12345
Health endpoint: http://127.0.0.1:8000/health
Readiness endpoint: http://127.0.0.1:8000/readiness

You can use the health endpoint to check if your application is running. Open http://127.0.0.1:8000/health in your browser, and you should see: {"status": "healthy"}

3. Run a Proxy Server with Ngrok

Since our server is running locally, the WhatsApp webhook cannot call the endpoint for verification. What we need is a public URL that can be used by the webhook. There are two options: deploy the application to a cloud server or create a proxy server tunnel. Since we are still in the development process, we will use the second option.

Go to ngrok Signup and create a free account.

Install ngrok locally. Depending on your system, you can use Brew, Chocolatey, or simply download and install it. See: Setup & Installation.

After installation, add your authentication token using the first command below in your terminal. Replace $YOUR-AUTHENTICATION_TOKEN with your ngrok authentication token, which can be found under “Your Authtoken” in the ngrok dashboard.

Then begin forwarding traffic from your localhost on port 8000 by running the second command:

> ngrok config add-authtoken $YOUR-AUTHENTICATION_TOKEN
> ngrok http http://localhost:8000

Your local server is now accessible via public URLs provided by ngrok.
You should see something like this:

Forwarding https://<random-string>.ngrok.io -> http://localhost:8000

Use the HTTPS URL provided by ngrok for the webhook configuration.

4. Implementing the Webhook

Now let us return to Meta’s Cloud API to implement the desired webhook.

Navigate to Meta for Developers and select the app created before.

In the left-hand menu, go to WhatsApp > Configuration.

In the Webhook section, paste your ngrok HTTPS forwarding URL into the Callback URL field and enter the VERIFICATION_TOKEN defined in main.py into the Verification Token field.

Click the confirm and save button and wait for the webhook to verify your backend.

In the Webhook Fields section, enable the messages toggle under Subscribed Fields.

That’s it! You should now be able to receive WhatsApp messages in your Python backend server.

4.1 Understanding Webhooks and Conversations

Webhooks are HTTP callbacks that enable programs to receive real-time updates when certain events occur, such as a new message or a status change. Webhooks make system integrations and automation possible by delivering an HTTP request containing event data to a pre-configured URL (in our case, the ngrok proxy server URL).

To understand the logic and pricing behind webhooks in Meta’s ecosystem, it is helpful to understand some basic principles about conversations.

A “conversation” on the WhatsApp API starts when:

1. The user sends a message: This opens a 24-hour window, during which you can reply with messages including text, images, or other media without additional costs.

2. The business initiates contact: If no user message has been received recently (no open 24-hour window), your AI assistant must use a pre-approved template message to start the conversation. You can add custom templates, but they need to be approved by Meta.

As long as the user keeps replying, the 24-hour window resets with each new message. This makes it possible to have continuous interaction without additional costs. A conversation costs about 0.00–0.08 USD. The concrete pricing depends on your conversation type (Marketing, Utility, or Service) and your location. FYI: Service conversations currently seem to be free. You can find the concrete pricing here: WhatsApp Pricing

5. Build a Receive Message Endpoint

Now we are able to receive messages in our backend. Since we have subscribed to message objects, each time a message is sent to your test number, the webhook will create a POST request to the callback URL that you defined in the previous step. What we need to do next is build an endpoint for POST requests in our FastAPI application.

Let us first define the requirements:

Return a 200 HTTP status code: This is essential to inform the Cloud API that the message has been received successfully. Failing to do so will cause the Cloud API to retry sending the message for up to 7 days.

Extract phone number and message: The payload of the incoming request contains data that includes the phone number and the message, which we need to process in the backend.

Filter incoming objects: Since the Cloud API might send multiple events for the same message (such as sent, received, and read), the backend needs to ensure that only one instance of the message is processed.

Handle multiple message types: The backend can handle different types of messages, such as text, voice messages, and images.
In order not to stretch the scope of this article, we will only lay the groundwork for images without implementing the handling end to end.

Process with the LLM-Agent workflow: The extracted information is processed using the LLM-Agent workflow, which we have developed in previous parts of this series. You can also use another agentic implementation, e.g., LangChain or LangGraph.

5.1 Define Models and Schemas

We will receive a payload from the webhook. You can find example payloads in Meta’s documentation: Example Payload

I prefer to write my code with Pydantic to add type safety to my Python code. Moreover, type annotations and Pydantic are an optimal match for FastAPI applications. So, let’s first define the models used in our endpoint:

# app/schema.py
from typing import List, Optional

from pydantic import BaseModel, Field


class Profile(BaseModel):
    name: str


class Contact(BaseModel):
    profile: Profile
    wa_id: str


class Text(BaseModel):
    body: str


class Image(BaseModel):
    mime_type: str
    sha256: str
    id: str


class Audio(BaseModel):
    mime_type: str
    sha256: str
    id: str
    voice: bool


class Message(BaseModel):
    from_: str = Field(..., alias="from")
    id: str
    timestamp: str
    text: Text | None = None
    image: Image | None = None
    audio: Audio | None = None
    type: str


class Metadata(BaseModel):
    display_phone_number: str
    phone_number_id: str


class Value(BaseModel):
    messaging_product: str
    metadata: Metadata
    contacts: List[Contact] | None = None
    messages: List[Message] | None = None


class Change(BaseModel):
    value: Value
    field: str
    statuses: List[dict] | None = None


class Entry(BaseModel):
    id: str
    changes: List[Change]


class Payload(BaseModel):
    object: str
    entry: List[Entry]


class User(BaseModel):
    id: int
    first_name: str
    last_name: str
    phone: str
    role: str


class UserMessage(BaseModel):
    user: User
    message: str | None = None
    image: Image | None = None
    audio: Audio | None = None

5.2 Parse Incoming Messages

Next, we are going to create some helper functions for using dependency injection in FastAPI:

# app/main.py
from app.domain import message_service


def parse_message(payload: Payload) -> Message | None:
    if not payload.entry[0].changes[0].value.messages:
        return None
    return payload.entry[0].changes[0].value.messages[0]


def get_current_user(message: Annotated[Message, Depends(parse_message)]) -> User | None:
    if not message:
        return None
    return message_service.authenticate_user_by_phone_number(message.from_)


def parse_audio_file(message: Annotated[Message, Depends(parse_message)]) -> Audio | None:
    if message and message.type == "audio":
        return message.audio
    return None


def parse_image_file(message: Annotated[Message, Depends(parse_message)]) -> Image | None:
    if message and message.type == "image":
        return message.image
    return None


def message_extractor(
    message: Annotated[Message, Depends(parse_message)],
    audio: Annotated[Audio, Depends(parse_audio_file)],
):
    if audio:
        return message_service.transcribe_audio(audio)
    if message and message.text:
        return message.text.body
    return None

Parsing the payload: The parse_message function extracts the first message from the incoming payload if it exists. This function returns None if no messages are found, so that only valid messages are processed.

User authentication: The get_current_user function uses the parse_message dependency injection to extract the message and then authenticates the user based on the phone number associated with the message.
Here we ensure that only authenticated users are allowed to send messages.

Audio and image parsing: These functions extract audio or image files from the message if the message type is "audio" or "image", respectively. This allows the application to handle different types of media.

Message extraction: The message_extractor function attempts to extract text from the message or transcribe audio into text. This ensures that, regardless of the message type, the content can be processed.

Here we have one import from our domain layer. The message_service script is where we place all domain-specific code for this implementation, such as authenticate_user_by_phone_number and transcribe_audio.

5.3 Implementing the POST Endpoint

# app/main.py
import threading

from typing_extensions import Annotated
from fastapi import APIRouter, Query, HTTPException, Depends

from app.domain import message_service
from app.schema import Payload, Message, Audio, Image, User

# ... rest of the code ...


@app.post("/", status_code=200)
def receive_whatsapp(
    user: Annotated[User, Depends(get_current_user)],
    user_message: Annotated[str, Depends(message_extractor)],
    image: Annotated[Image, Depends(parse_image_file)],
):
    if not user and not user_message and not image:
        return {"status": "ok"}
    if not user:
        raise HTTPException(status_code=401, detail="Unauthorized")
    if image:
        return print("Image received")
    if user_message:
        thread = threading.Thread(
            target=message_service.respond_and_send_message, args=(user_message, user)
        )
        thread.daemon = True
        thread.start()
    return {"status": "ok"}

POST endpoint implementation: This endpoint handles the incoming POST request. It checks whether the user, message, or image is valid. If none are valid, it simply returns a status message to the Cloud API. If the user is not authenticated, it raises an HTTPException with a 401 status code.

Processing images and messages: If an image is received, we make a simple stdout print as a placeholder for future image handling. If a text message is received, it is processed asynchronously using a separate thread to avoid blocking the main application thread. The message_service.respond_and_send_message function is invoked to handle the message according to the LLM-Agent workflow.

Why a background thread for the webhook: WhatsApp will resend the webhook until it gets a 200 response, so handling the message in a separate thread ensures that message processing doesn’t block the webhook response.

6. Message Services

In our presentation layer, where we previously defined our endpoint, we use some message_service functions that need to be defined next. Specifically, we need an implementation for processing and transcribing audio payloads, authenticating users, and finally invoking our agent and sending a response back. We will place all this functionality inside domain/message_service.py. In production settings, as your application grows, I would recommend splitting it further into, e.g., transcription_service.py, message_service.py, and authentication_service.py.

In multiple functions in this section, we will make requests to the Meta API “https://graph.facebook.com/…”. In all of these requests, we need to include authorization headers with WHATSAPP_API_KEY, which we created in step 1.3, as the bearer token. I usually store API keys and tokens in an .env file and access them with the Python dotenv library.
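As a side note, a minimal sketch of that .env approach could look like this (assuming python-dotenv is installed and the .env file defines the WHATSAPP_API_KEY and OPENAI_API_KEY variables used in this article):

import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads key/value pairs from the .env file into the process environment

WHATSAPP_API_KEY = os.getenv("WHATSAPP_API_KEY")
llm = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))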
We also use the OpenAI client with your OPENAI_API_KEY, which could also be stored in the .env file. But for simplicity, let’s just place and initialize them at the top of the message_service.py script as follows:

import os
import json
from typing import BinaryIO

import requests
from openai import OpenAI

WHATSAPP_API_KEY = "YOUR_ACCESS_TOKEN"
llm = OpenAI(api_key="YOUR_OPENAI_API_KEY")

Replace "YOUR_ACCESS_TOKEN" with the actual access token that you created in step 1.3.

6.1 Processing and Transcribing Audio Files

Handling voice recordings from a WhatsApp webhook is not as straightforward as it may seem. First of all, it is important to know that the incoming webhook only tells us the data type and an object ID. It does not contain the binary audio file. We first have to download the audio file using Meta’s Graph API. To download our received audio, we need to make two sequential requests. The first one is a GET request with the object_id to obtain the download URL. This download URL is the target of our second GET request.

def download_file_from_facebook(file_id: str, file_type: str, mime_type: str) -> str | None:
    # First GET request to retrieve the download URL
    url = f"https://graph.facebook.com/v19.0/{file_id}"
    headers = {"Authorization": f"Bearer {WHATSAPP_API_KEY}"}
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        download_url = response.json().get('url')
        # Second GET request to download the file
        response = requests.get(download_url, headers=headers)
        if response.status_code == 200:
            # Extract file extension from mime_type
            file_extension = mime_type.split('/')[-1].split(';')[0]
            # Create file_path with extension
            file_path = f"{file_id}.{file_extension}"
            with open(file_path, 'wb') as file:
                file.write(response.content)
            if file_type == "image" or file_type == "audio":
                return file_path
        raise ValueError(f"Failed to download file. Status code: {response.status_code}")
    raise ValueError(f"Failed to retrieve download URL. Status code: {response.status_code}")

Here, we basically get the download URL and download the file to the file system, using the object ID and the file extension as its file_path. If something fails, we raise a ValueError that indicates where the error occurred.

Next, we simply define a function that takes the audio binary and transcribes it using Whisper:

def transcribe_audio_file(audio_file: BinaryIO) -> str:
    if not audio_file:
        return "No audio file provided"
    try:
        transcription = llm.audio.transcriptions.create(
            file=audio_file,
            model="whisper-1",
            response_format="text"
        )
        return transcription
    except Exception as e:
        raise ValueError("Error transcribing audio") from e

And finally, let’s bring the download and transcription functions together:

def transcribe_audio(audio: Audio) -> str:
    file_path = download_file_from_facebook(audio.id, "audio", audio.mime_type)
    with open(file_path, 'rb') as audio_binary:
        transcription = transcribe_audio_file(audio_binary)
    try:
        os.remove(file_path)
    except Exception as e:
        print(f"Failed to delete file: {e}")
    return transcription

6.2 Authenticate Users

While using the test number provided by Meta, we have to predefine which numbers our chatbot can send messages to. I am not quite sure, and have not tested, whether any number can send a message to our chatbot. But in any case, as soon as we switch to a custom number, we don’t want just anyone to be able to execute our agent chatbot. So we need a method to authenticate the user. We have several options to do this. First of all, we have to think about where to store user information.
We could use, for example, a database like PostgreSQL or a non-relational database like Firestore. We could also predefine our users in the file system in a JSON file or an .env file. For this tutorial, I will go with the simplest way and hardcode the users within a list in our authentication function.

A list entry has the structure of the User model as defined in step 5.1. So a user consists of an ID, first name, last name, and phone number. We have not implemented a role system in our agent workflow yet. But in most use cases with different users, such as in the example of a small business assistant, different users will have different rights and access scopes. For now, we just pass "default" as a placeholder role.

def authenticate_user_by_phone_number(phone_number: str) -> User | None:
    allowed_users = [
        {"id": 1, "phone": "+1234567890", "first_name": "John", "last_name": "Doe", "role": "default"},
        {"id": 2, "phone": "+0987654321", "first_name": "Jane", "last_name": "Smith", "role": "default"}
    ]
    for user in allowed_users:
        if user["phone"] == phone_number:
            return User(**user)
    return None

So we just check whether the phone number is in our list of allowed_users and return the user if it is. Otherwise, we return None. If you look at our endpoint in step 5.3, you will see that we raise an error if the user is None to prevent further processing of unauthorized user messages.

6.3 Send Message

Now, our last helper function before we can actually invoke our agent is send_whatsapp_message. I have included two modes in this function because of some Meta-specific WhatsApp API logic.

Basically, you are not allowed to send a custom message to a user as a conversation starter. This means you can respond with an individual text message if the user starts the conversation and writes a message to the chatbot first. Otherwise, if you want the chatbot to initiate a conversation, you are limited to approved templates, like the “Hello World” template.

Also important to mention: in Meta’s logic, once a conversation has been started, it opens a conversation window of 24 hours in which you can send messages to that user. This conversation window is also what gets charged, not the individual message. It gets a bit more complex based on the type of conversation, such as marketing, support, etc.

You can also define a template of your own and have it approved by Meta. I have not done that at this point, so to test whether we can send a message from our backend to a user, I use the “Hello World” template. If you add some custom approved templates, you can also use this function to send them to the user.

So back to the code. To send a message, we make a POST request and define a payload that either includes the text body or the template:

def send_whatsapp_message(to, message, template=True):
    url = "https://graph.facebook.com/v18.0/289534840903017/messages"
    headers = {
        "Authorization": f"Bearer {WHATSAPP_API_KEY}",
        "Content-Type": "application/json"
    }
    if not template:
        data = {
            "messaging_product": "whatsapp",
            "preview_url": False,
            "recipient_type": "individual",
            "to": to,
            "type": "text",
            "text": {
                "body": message
            }
        }
    else:
        data = {
            "messaging_product": "whatsapp",
            "to": to,
            "type": "template",
            "template": {
                "name": "hello_world",
                "language": {
                    "code": "en_US"
                }
            }
        }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    return response.json()
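As a quick usage sketch (the recipient number below is a placeholder): reply with free-form text when the user has an open 24-hour window, and fall back to the approved template when initiating contact:

# Reply with a free-form text message (only allowed inside an open 24-hour conversation window).
send_whatsapp_message("+1234567890", "Your expense has been logged.", template=False)

# Initiate a conversation with the pre-approved "hello_world" template.
send_whatsapp_message("+1234567890", "", template=True)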
6.4 Invoke Our Agent

Finally, we can integrate our agent from our previous examples. At this stage, you can also integrate your custom agent, a LangChain AgentExecutor, a LangGraph agent workflow, etc.

So our main function, which will be called on each incoming message, is respond_and_send_message. It takes the user_message string and passes it to our agent workflow as the input object.

# app/domain/message_service.py
import json

import requests

from app.domain.agents.routing_agent import RoutingAgent
from app.schema import User


def respond_and_send_message(user_message: str, user: User):
    agent = RoutingAgent()
    response = agent.run(user_message, user.id)
    send_whatsapp_message(user.phone, response, template=False)

After invoking our agent, we get a response message that we want to send back to the user using the send_whatsapp_message function.

Now you should be able to send messages to the test number and get an answer from the agent executor. Remark: while using the WhatsApp test number, you have to register the phone numbers that are allowed to send messages to your bot in your Meta API app.

By following this guide, you’ve taken a big step toward creating a strong LLM-powered chatbot that works seamlessly with WhatsApp. This isn’t just about setting up automated business communication in real time; it’s about laying the groundwork for more advanced AI-driven workflows down the road.

What we have done:

WhatsApp Cloud API integration: We got Meta’s Cloud API for WhatsApp up and running, including building a webhook for live message handling.

Backend development with FastAPI: Set up endpoints to verify webhooks, process incoming messages, and handle responses asynchronously.

Multimedia and authentication support: Enabled features like audio transcription and user authentication for more personalized interactions. Image handling is not fully implemented, but we have laid the groundwork for it.

What’s next:

In the next part(s), which I promise to publish sooner 🙏, I will move the implementation to LangGraph. I will add more capabilities to the agent, like creating database tables and tools on its own, which will make the agent more flexible. I am also open to feedback and ideas on which features to add!

Wrapping it up:

Combining the reach and usability of WhatsApp with LLMs is a big win for businesses and personal use cases. Whether you’re aiming for a personal assistant or a full-blown business tool, this guide gives you the path to get there. Keep tinkering, improving, and pushing boundaries — this is just the start of what you can build.

Happy coding! 🚀

You can find the full code here: Github Repo

Full Link: https://github.com/elokus/WhatsappAgent

Creating a WhatsApp AI Agent with GPT-4o was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • How (and Where) ML Beginners Can Find Papers
    by Pascal Janetzky on December 22, 2024 at 3:01 pm

    From conferences to surveys. Continue reading on Towards Data Science »

  • What Every Aspiring Machine Learning Engineer Must Know to Succeed
    by Claudia Ng on December 22, 2024 at 2:07 pm

    Your Guide to Avoiding Critical Errors with Machine Learning in Production. Continue reading on Towards Data Science »

  • Propensity-Score Matching Is the Bedrock of Causal Inference
    by Ari Joury, PhD on December 22, 2024 at 12:02 pm

    And how to get started with it using Python. Continue reading on Towards Data Science »
