Introduction
As businesses scale their use of AI, automation platforms like Make.com are becoming essential for connecting services, streamlining workflows, and triggering LLM-driven tasks in real time. But with that convenience comes risk—particularly when user inputs are fed directly into large language models (LLMs) without proper validation.
Prompt injection is a growing category of AI-specific attack where malicious or cleverly crafted prompts are used to manipulate the behavior of your LLM. These attacks can result in data leakage, broken safeguards, offensive output, or even unauthorized actions within your workflow. Worse yet, they’re easy to trigger and hard to detect using traditional input sanitization methods.
That’s where grimly.ai comes in. grimly.ai acts as a purpose-built security layer that sits between your input source and your AI model. It automatically analyzes prompts, detects potentially dangerous behavior, and blocks unsafe input before it ever reaches the model.
In this post, you'll learn exactly how to integrate grimly's /classify/full endpoint into a Make.com scenario to secure your AI pipeline. No custom backend needed—just low-code security for high-stakes workflows.
What You’ll Build
This tutorial will walk you through building a Make.com scenario that injects AI security into your workflow—without requiring custom code. By the end, you’ll have a functional pipeline that automatically evaluates user input before it ever reaches an LLM, ensuring only safe, compliant, and authorized prompts are processed.
You’ll configure a series of Make.com modules that:
- Receive user input from a webhook, form, chatbot, or another connected app
- Send that input to grimly using a POST request to the /classify/full API
- Evaluate grimly's response to determine whether the prompt is blocked or allowed
- Route clean input to your LLM module (like OpenAI or Claude) for processing
- Block unsafe prompts before they trigger any downstream action
grimly.ai API Overview: /classify/full
You’ll use the grimly.ai classifier API to evaluate user prompts in real time.
POST https://grimly_API/api/v1/classify/full
Headers
Content-Type: application/json
X-API-Key: your_api_key
Body Parameters
- user_prompt (string, required)
- end_user_ip_address (string, optional)
- end_user_identifier (string, optional)
- ai_model (string, optional, e.g., gpt-4)
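Outside of Make.com, the same call can be made from any HTTP client. The sketch below assembles the request exactly as the parameter list above describes; the base URL is the placeholder from this post, and the shape of the JSON response is not specified here, so treat it as an assumption to verify against grimly's docs.

```python
import json
import urllib.request

# Placeholder host from this post; substitute your real grimly.ai endpoint.
GRIMLY_URL = "https://grimly_API/api/v1/classify/full"

def build_classify_request(user_prompt, api_key,
                           end_user_ip_address=None,
                           end_user_identifier=None,
                           ai_model=None):
    """Assemble the POST body and headers for /classify/full.

    Only user_prompt is required; the other fields are optional,
    per the parameter list above.
    """
    body = {"user_prompt": user_prompt}
    if end_user_ip_address:
        body["end_user_ip_address"] = end_user_ip_address
    if end_user_identifier:
        body["end_user_identifier"] = end_user_identifier
    if ai_model:
        body["ai_model"] = ai_model
    headers = {
        "Content-Type": "application/json",
        "X-API-Key": api_key,
    }
    return body, headers

def classify_prompt(user_prompt, api_key, **optional):
    """POST the prompt to grimly and return the parsed JSON verdict."""
    body, headers = build_classify_request(user_prompt, api_key, **optional)
    req = urllib.request.Request(
        GRIMLY_URL,
        data=json.dumps(body).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the body-building step is a pure function, you can unit-test your payload shape without an API key or network access.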
Step-by-Step Integration in Make.com
Step 1: Set Up the Webhook
Create a webhook module in Make.com to receive the user prompt from a chatbot, form, or API source.
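If you want to see what the webhook hands downstream modules, here is a minimal sketch of parsing an incoming payload into the fields the classifier expects. The incoming field names (`prompt`, `user_id`, `ip`) are purely illustrative assumptions; map them to whatever your chatbot or form actually sends.

```python
import json

def extract_prompt_fields(raw_body):
    """Pull the classifier's inputs out of a raw webhook body.

    The keys "prompt", "user_id", and "ip" are hypothetical; adjust
    them to match your real webhook payload.
    """
    payload = json.loads(raw_body)
    prompt = payload.get("prompt", "").strip()
    if not prompt:
        # Refuse empty input early rather than wasting a classify call.
        raise ValueError("webhook payload is missing a prompt")
    return {
        "user_prompt": prompt,
        "end_user_identifier": payload.get("user_id"),
        "end_user_ip_address": payload.get("ip"),
    }
```

In Make.com this mapping happens visually via mapped fields; the code is just the same logic made explicit.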
Step 2: Add grimly.ai Classifier (HTTP Module)
Add a new HTTP module configured as:
- Method: POST
- URL: https://grimly_API/api/v1/classify/full
- Headers: add Content-Type: application/json and your X-API-Key
- Body:
{
"user_prompt": "{{prompt_from_webhook}}",
"end_user_ip_address": "{{ip}}",
"end_user_identifier": "{{user_id}}",
"ai_model": "gpt-4"
}
Step 3: Add a Router
Use a router module to branch the flow depending on the grimly.ai response:
- blocked = true: end the flow or send an error message
- blocked = false: proceed to call the LLM
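The router's filter logic boils down to a single branch. The sketch below assumes the /classify/full response carries a boolean blocked field, as used in the filters above; note that it fails closed, so a malformed or missing verdict is treated as a block rather than a pass.

```python
def route(classification):
    """Mirror the Make.com router: branch on grimly's verdict.

    Assumes the response JSON includes a boolean "blocked" field.
    Missing or unrecognized responses fail closed (treated as blocked).
    """
    if classification.get("blocked", True):
        return "reject"   # end the flow or send an error message
    return "forward"      # safe to hand the prompt to the LLM
```

Failing closed is a deliberate choice: if the classifier is unreachable or returns something unexpected, no prompt reaches the model.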
Step 4: Call the LLM (Optional)
If the prompt is allowed, pass it to your LLM module (OpenAI, Claude, etc.) and continue the workflow.
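Put together, the whole scenario is a classify-then-call pipeline. This sketch injects the classifier and LLM as callables so the control flow can be exercised with stubs; in production, classify_fn would POST to /classify/full and llm_fn would call your model provider. The blocked field is again an assumption carried over from the router step.

```python
def secure_pipeline(user_prompt, classify_fn, llm_fn):
    """Classify first, call the LLM only if the prompt is allowed.

    classify_fn(prompt) -> dict with a boolean "blocked" field (assumed).
    llm_fn(prompt) -> model output. Both are injected for testability.
    """
    verdict = classify_fn(user_prompt)
    if verdict.get("blocked", True):  # fail closed on unknown verdicts
        return {"status": "blocked", "output": None}
    return {"status": "ok", "output": llm_fn(user_prompt)}
```

With stubs standing in for the two APIs, you can verify that blocked prompts never reach the model call.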
Why This Integration Matters
Make.com allows you to automate powerful workflows without writing code—but when those workflows include LLMs, you also inherit the risks that come with them. AI systems are highly sensitive to the quality and intent of the input they receive, and even a single malicious or misaligned prompt can derail the entire process.
By inserting grimly.ai into the early stages of your Make.com flow, you introduce a vital layer of AI-native security. Instead of trusting that inputs are safe—or reacting after something breaks—you proactively vet every message before it ever reaches your model.
This simple yet powerful integration delivers:
- Prompt injection protection: Catch jailbreak attempts and adversarial input before it compromises your AI output or causes model misbehavior.
- PII redaction and compliance: Use grimly’s advanced classifiers and redactors to automatically strip sensitive or regulated information before processing.
- Audit-ready metadata: Every prompt is logged, hashed, and enriched with context—perfect for debugging, analytics, or meeting internal security standards.
- Workflow reliability: Eliminate unexpected LLM responses caused by poorly formatted or malicious prompts, ensuring consistency and trustworthiness.
- Zero trust for AI input: Treat user content as untrusted by default—grimly.ai enforces that principle with every API call.
In short, this integration turns your AI pipeline from a hopeful experiment into a hardened, production-ready automation flow.
Conclusion
The ease of integrating large language models into workflows via Make.com is a double-edged sword: you gain power and speed, but also inherit risk. If your automation includes AI decision-making or content generation, it's your responsibility to ensure that every prompt entering your system is safe, compliant, and free from manipulation.
With just a few Make.com modules and one API call to grimly.ai, you can go from insecure to secure—blocking threats in real time, gaining visibility into user behavior, and laying the foundation for trustworthy AI operations. The best part? You don’t need to stand up infrastructure, hire a security engineer, or refactor your stack.
Whether you’re building a customer-facing AI assistant, an internal knowledge agent, or a complex automation suite, grimly.ai gives you a plug-and-play way to enforce responsible AI usage at scale. And this is just the beginning. Our platform supports output filtering, prompt redaction, abuse analytics, team-based policy controls, and more.
AI security isn’t optional—it’s the new default. Start protecting your workflows today with grimly.ai + Make.com, and take the first step toward hardened, production-grade AI systems.
Equip your AI with grimly.ai — start safeguarding your LLM systems now →
Hungry for deeper dives? Explore the grimly.ai blog for expert guides, adversarial prompt tips, and the latest on LLM security trends.
Scott Busby
Founder of grimly.ai and LLM security red team practitioner.