How to Secure Your AI Workflows in Make.com with grimly.ai

By Scott Busby · 7 min read

Introduction

As businesses scale their use of AI, automation platforms like Make.com are becoming essential for connecting services, streamlining workflows, and triggering LLM-driven tasks in real time. But with that convenience comes risk—particularly when user inputs are fed directly into large language models (LLMs) without proper validation.

Prompt injection is a growing category of AI-specific attack where malicious or cleverly crafted prompts are used to manipulate the behavior of your LLM. These attacks can result in data leakage, broken safeguards, offensive output, or even unauthorized actions within your workflow. Worse yet, they’re easy to trigger and hard to detect using traditional input sanitization methods.

That’s where grimly.ai comes in. grimly.ai acts as a purpose-built security layer that sits between your input source and your AI model. It automatically analyzes prompts, detects potentially dangerous behavior, and blocks unsafe input before it ever reaches the model.

In this post, you'll learn exactly how to integrate grimly.ai's /classify/full endpoint into a Make.com scenario to secure your AI pipeline. No custom backend needed—just low-code security for high-stakes workflows.

What You’ll Build

This tutorial will walk you through building a Make.com scenario that injects AI security into your workflow—without requiring custom code. By the end, you’ll have a functional pipeline that automatically evaluates user input before it ever reaches an LLM, ensuring only safe, compliant, and authorized prompts are processed.

You’ll configure a series of Make.com modules that:

- Receive the user prompt via a webhook
- Send the prompt to grimly.ai’s /classify/full endpoint for evaluation
- Route the flow based on the classification verdict
- Forward only approved prompts to your LLM

grimly.ai API Overview: /classify/full

You’ll use the grimly.ai classifier API to evaluate user prompts in real time.

POST https://grimly_API/api/v1/classify/full

Headers

- Content-Type: application/json
- Authorization: Bearer YOUR_GRIMLY_API_KEY (your grimly.ai API key; check your dashboard for the exact header format)

Body Parameters

- user_prompt (string): the raw end-user prompt to evaluate
- end_user_ip_address (string): the IP address of the end user, useful for abuse analytics
- end_user_identifier (string): a stable identifier for the end user
- ai_model (string): the downstream model the prompt is destined for, e.g. gpt-4
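If you want to smoke-test the endpoint outside Make.com first, a minimal Python sketch looks like this. Note the assumptions: the base URL is the placeholder from above, the Bearer-token auth header is assumed rather than confirmed, and the response shape is whatever the API actually returns.

```python
import json
import urllib.request

# Placeholder base URL from this post; substitute your real grimly.ai endpoint.
GRIMLY_BASE = "https://grimly_API"

def build_classify_request(prompt: str, ip: str, user_id: str, model: str = "gpt-4") -> dict:
    """Assemble the /classify/full request body shown in this post."""
    return {
        "user_prompt": prompt,
        "end_user_ip_address": ip,
        "end_user_identifier": user_id,
        "ai_model": model,
    }

def classify(prompt: str, ip: str, user_id: str, api_key: str) -> dict:
    """POST the prompt to grimly.ai and return the parsed JSON verdict."""
    body = json.dumps(build_classify_request(prompt, ip, user_id)).encode()
    req = urllib.request.Request(
        f"{GRIMLY_BASE}/api/v1/classify/full",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same request body is what you'll paste into the Make.com HTTP module below, with mustache placeholders in place of the Python arguments.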

Step-by-Step Integration in Make.com

Step 1: Set Up the Webhook

Create a webhook module in Make.com to receive the user prompt from a chatbot, form, or API source.
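For reference, the webhook might receive a JSON payload like the one below. The field names are illustrative assumptions; align them with whatever your chatbot, form, or API source actually sends, since they become the `{{...}}` variables used in the next step.

```python
import json

# Illustrative payload a chatbot or form might POST to the Make.com webhook.
raw = '{"prompt": "Summarize this ticket", "ip": "203.0.113.7", "user_id": "u-42"}'

payload = json.loads(raw)
prompt_from_webhook = payload["prompt"]   # maps to {{prompt_from_webhook}}
ip = payload["ip"]                        # maps to {{ip}}
user_id = payload["user_id"]              # maps to {{user_id}}
```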

Step 2: Add grimly.ai Classifier (HTTP Module)

Add a new HTTP module configured as:

- Method: POST
- URL: https://grimly_API/api/v1/classify/full
- Headers: Content-Type: application/json, plus your grimly.ai API key

Body:

{
  "user_prompt": "{{prompt_from_webhook}}",
  "end_user_ip_address": "{{ip}}",
  "end_user_identifier": "{{user_id}}",
  "ai_model": "gpt-4"
}

Step 3: Add a Router

Use a router module to branch the flow depending on the grimly.ai response:

- Allowed: the prompt passed classification; continue to the LLM step.
- Blocked: the prompt was flagged as unsafe; stop the scenario or return a rejection message to the user.
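The router's decision logic amounts to a simple guard. In code it might look like the sketch below (the `allowed` field name in the grimly.ai response is an assumption for illustration; adapt it to the actual classifier output):

```python
def route(classification: dict, prompt: str) -> str:
    """Mimic the Make.com router: forward safe prompts, reject flagged ones.

    `classification` is the parsed /classify/full response; the `allowed`
    key is an assumed field name, used here only to show the branch.
    """
    if classification.get("allowed"):
        return f"FORWARD_TO_LLM: {prompt}"
    return "BLOCKED: prompt rejected by grimly.ai"
```

In Make.com you express the same guard as a filter condition on each router branch rather than as code.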

Step 4: Call the LLM (Optional)

If the prompt is allowed, pass it to your LLM module (OpenAI, Claude, etc.) and continue the workflow.

Why This Integration Matters

Make.com allows you to automate powerful workflows without writing code—but when those workflows include LLMs, you also inherit the risks that come with them. AI systems are highly sensitive to the quality and intent of the input they receive, and even a single malicious or misaligned prompt can derail the entire process.

By inserting grimly.ai into the early stages of your Make.com flow, you introduce a vital layer of AI-native security. Instead of trusting that inputs are safe—or reacting after something breaks—you proactively vet every message before it ever reaches your model.

This simple yet powerful integration delivers:

- Real-time blocking of prompt injection and other unsafe input
- Proactive vetting of every prompt before it reaches your model
- Visibility into end-user behavior through per-user identifiers and IP addresses
- Low-code security with no custom backend or extra infrastructure to maintain

In short, this integration turns your AI pipeline from a hopeful experiment into a hardened, production-ready automation flow.

Conclusion

The ease of integrating large language models into workflows via Make.com is a double-edged sword: you gain power and speed, but also inherit risk. If your automation includes AI decision-making or content generation, it's your responsibility to ensure that every prompt entering your system is safe, compliant, and free from manipulation.

With just a few Make.com modules and one API call to grimly.ai, you can go from insecure to secure—blocking threats in real time, gaining visibility into user behavior, and laying the foundation for trustworthy AI operations. The best part? You don’t need to stand up infrastructure, hire a security engineer, or refactor your stack.

Whether you’re building a customer-facing AI assistant, an internal knowledge agent, or a complex automation suite, grimly.ai gives you a plug-and-play way to enforce responsible AI usage at scale. And this is just the beginning. Our platform supports output filtering, prompt redaction, abuse analytics, team-based policy controls, and more.

AI security isn’t optional—it’s the new default. Start protecting your workflows today with grimly.ai + Make.com, and take the first step toward hardened, production-grade AI systems.


Equip your AI with grimly.ai — start safeguarding your LLM systems now →

Hungry for deeper dives? Explore the grimly.ai blog for expert guides, adversarial prompt tips, and the latest on LLM security trends.


Scott Busby
Founder of grimly.ai and LLM security red team practitioner.