
🤖 AIBot in LangChain with OpenAI and Whinself

In this tutorial, we'll create a simple chatbot that uses LangChain and OpenAI to process messages and integrates with Whinself to connect the chatbot's input/output with WhatsApp. We'll cover the following steps:

  • Setting up a Flask-based webhook endpoint to receive WhatsApp messages from Whinself.
  • Parsing incoming message events (both Conversation and ExtendedTextMessage).
  • Integrating with LangChain and OpenAI to generate responses.
  • Sending the response back to WhatsApp via Whinself using a Python function.
  • Testing the overall setup.

Prerequisites

  • Python 3.8+
  • A self-hosted Whinself instance with its API running locally.
  • An OpenAI API key.
  • Basic familiarity with Python and Flask.
  • pip for installing dependencies.

Step 1: Install Required Python Packages

Install the necessary packages using pip:

pip install flask requests langchain openai
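
Depending on your LangChain version, the OpenAI integration may live in a separate langchain-openai package rather than in langchain itself. If the LangChain imports used later in this tutorial fail on your installation, installing that package as well is a reasonable fallback:

pip install -U langchain-openai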

Step 2: Create the Flask Webhook Endpoint

Create a file named app.py and add the following code:

from flask import Flask, request, jsonify
import requests
import json

app = Flask(__name__)

# Configure your local Whinself API endpoint and target WhatsApp JID
WHINSELF_API_URL = "http://localhost:8888/wspout"  # Adjust this URL if your Whinself API is hosted elsewhere
TARGET_WHATSAPP_JID = "<phone-number>@s.whatsapp.net"  # Replace with the target WhatsApp JID

def process_message_with_chatbot(message: str) -> str:
    """
    Process the incoming message using LangChain and OpenAI.
    For a production system, integrate LangChain as needed.
    This example simply echoes the message.
    """
    # Example integration with LangChain and OpenAI (uncomment and modify with your API key):
    #
    # from langchain.llms import OpenAI
    # llm = OpenAI(api_key="YOUR_OPENAI_API_KEY")
    # response = llm(message)
    # return response.strip()
    #
    # For now, we just return an echo:
    return f"Echo: {message}"

def send_response_to_whatsapp(response_text: str):
    """
    Send a response back to WhatsApp via Whinself.
    """
    payload = {
        "text": response_text,
        "jid": TARGET_WHATSAPP_JID
    }
    headers = {"Content-Type": "application/json"}
    try:
        r = requests.post(WHINSELF_API_URL, json=payload, headers=headers)
        print("Response sent to WhatsApp:", r.text)
    except Exception as e:
        print("Error sending response:", e)

@app.route("/webhook", methods=["POST"])
def webhook():
    """
    Endpoint to receive messages from Whinself.
    """
    try:
        payload = request.get_json(force=True)
    except Exception:
        return jsonify({"status": "error", "error": "Invalid JSON"}), 400

    print("Received payload:")
    print(json.dumps(payload, indent=2))

    # Parse the incoming message event
    message_text = ""
    if "conversation" in payload:
        # Conversation message event: plain text message
        message_text = payload["conversation"]
    elif "text" in payload:
        # ExtendedTextMessage event: message with additional metadata
        message_text = payload["text"]
        # Additional fields such as title, description, canonicalUrl, previewType, and mentionedJid
        # are available in the payload if needed.
    else:
        return jsonify({"status": "ignored", "reason": "Unknown message format"}), 200

    # Process the message using the chatbot (LangChain/OpenAI)
    response_text = process_message_with_chatbot(message_text)

    # Send the response back to WhatsApp via Whinself
    send_response_to_whatsapp(response_text)

    return jsonify({"status": "ok"}), 200

if __name__ == "__main__":
    # Run the Flask app on port 8000 (or your preferred port)
    app.run(host="0.0.0.0", port=8000, debug=True)
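
To make the two parsing branches concrete, here are illustrative payload shapes that the webhook above would accept. The field names are taken directly from the parsing code; a real Whinself event may nest these fields differently or carry additional data, so treat them only as a sketch.

# Conversation event: plain text only
{"conversation": "Hello bot!"}

# ExtendedTextMessage event: text plus optional link-preview metadata (placeholder values)
{
    "text": "Check this out: https://example.com",
    "title": "Example Domain",
    "description": "Illustrative preview description",
    "canonicalUrl": "https://example.com",
    "mentionedJid": []
}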

Step 3: Configure Whinself

Ensure your self-hosted Whinself instance is running and accessible. Update Whinself's config.json so that nrurl (the webhook URL) points to your Flask endpoint. For example:

{
  "slotid": "my-whatsapp-bot",
  "nrurl": "http://localhost:8000/webhook",
  "portin": 9001,
  "devicename": "Linux WhatsApp Bot",
  "debuglevel": 0,
  "logstdout": true,
  "logsse": true
}

Make sure to restart Whinself after making changes to its configuration.


Step 4: Testing the Chatbot

  1. Start your Flask webhook server:

    python app.py
  2. Confirm that your Whinself instance is running and that the nrurl in its configuration points to your Flask server (e.g., http://localhost:8000/webhook).

  3. Send a message from WhatsApp to your bot, or simulate a POST request to the webhook endpoint with a tool like Postman or curl (see the Python example after this list).

  4. The Flask server should:

    • Receive and parse the incoming message.
    • Process it using the chatbot function (process_message_with_chatbot).
    • Send a response back to WhatsApp via Whinself using the send_response_to_whatsapp function.
    • Log the incoming and outgoing messages in the console.
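
For step 3, you can also script the simulated request in Python instead of using Postman or curl. This is only a local test sketch; it assumes the Flask server from Step 2 is listening on http://localhost:8000/webhook:

import requests

# Simulate a Whinself "Conversation" event hitting the local webhook.
# The payload shape mirrors the parsing logic in app.py.
resp = requests.post(
    "http://localhost:8000/webhook",
    json={"conversation": "Hello from a test client!"},
    timeout=10,
)
print(resp.status_code, resp.json())  # Expect: 200 {'status': 'ok'}

Note that the webhook will still try to forward the echo to Whinself; if Whinself isn't reachable yet, send_response_to_whatsapp simply logs the error and the webhook still returns {"status": "ok"}.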

Step 5: Integrating with LangChain and OpenAI

To replace the simple echo with a full chatbot integration:

  1. Uncomment and modify the code in the process_message_with_chatbot function.
  2. Ensure you have set your OpenAI API key.
  3. Follow the LangChain documentation for more advanced integrations.

For example, updating the function might look like this:

def process_message_with_chatbot(message: str) -> str:
    from langchain.llms import OpenAI
    llm = OpenAI(api_key="YOUR_OPENAI_API_KEY")
    response = llm(message)
    return response.strip()

Replace "YOUR_OPENAI_API_KEY" with your actual OpenAI API key.


Conclusion

You've now built a basic chatbot using Flask, LangChain, and OpenAI that integrates with Whinself to connect WhatsApp messaging to your bot's processing logic. This setup provides a framework for further enhancements, such as handling additional message fields and building more complex conversation flows.

Happy coding!