How to Automate Social Media Posting with n8n: Schedule, Cross-Post, and Repurpose Content
Social media is one of those things that everyone knows is important but nobody has enough time for. You write a great blog post, share it once on Twitter, maybe post it on LinkedIn if you remember, and then it disappears into the void. Two weeks later you are staring at an empty content calendar wondering why your social presence is not growing.
I know this cycle well. As a startup consultant working with founders across Latin America, I have seen talented teams with great products who simply cannot keep up with the demands of consistent social media posting. They are too busy building, selling, and supporting customers to spend an hour every day crafting platform-specific posts.
That is why I built a social media automation system with n8n that takes a single piece of content — a blog post, a newsletter, a product update — and automatically generates, formats, and schedules platform-specific social media posts across Twitter, LinkedIn, and Instagram. The whole system runs in the background with zero daily effort.
In this guide, I will show you exactly how I built it and how you can set up the same system for your business.
Why Automate Social Media with n8n Instead of Buffer or Hootsuite?
Fair question. There are plenty of social media scheduling tools out there. Here is why I prefer n8n:
Cost. Buffer and Hootsuite charge monthly fees that scale with the number of channels and scheduled posts. n8n charges nothing per post or per channel, whether self-hosted or on the cloud plan; self-hosted, your only ongoing cost is the server it runs on.
Content generation. The scheduling tools let you queue and schedule posts, but you still have to write them. With n8n, I integrate AI (OpenAI, Claude, or any LLM) directly into the workflow to generate platform-specific variations of my content automatically.
Deep integrations. n8n does not just post to social media. It connects to my blog’s RSS feed, my CMS, my analytics tools, and my email marketing platform. This means I can build end-to-end content pipelines that go far beyond simple scheduling.
Custom logic. I can add rules like “do not post on weekends,” “space posts at least 4 hours apart,” “never post more than 3 times per day on Twitter,” or “repost evergreen content every 90 days.” Try doing that in Buffer.
Full control over data. Everything runs on my infrastructure. My API keys, my content, my posting schedule — none of it sits in a third-party platform that could change its pricing or terms overnight.
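To make the custom-logic point concrete, here is roughly what those rules look like inside an n8n Function (Code) node. This is a sketch under my own assumptions: the function name and the shape of `todaysPosts` are illustrative, not anything n8n provides.

```javascript
// Sketch of the posting rules described above. A scheduling step would call
// this before publishing; `todaysPosts` is assumed to be an array of
// { platform, postedAt: Date } records for the current day.
function canPostNow(platform, now, todaysPosts) {
  // Rule 1: do not post on weekends (Sunday = 0, Saturday = 6)
  const day = now.getDay();
  if (day === 0 || day === 6) return false;

  // Rule 2: space posts on the same platform at least 4 hours apart
  const FOUR_HOURS = 4 * 60 * 60 * 1000;
  const lastPost = todaysPosts
    .filter((p) => p.platform === platform)
    .map((p) => p.postedAt.getTime())
    .sort((a, b) => b - a)[0];
  if (lastPost !== undefined && now.getTime() - lastPost < FOUR_HOURS) {
    return false;
  }

  // Rule 3: never more than 3 posts per day on Twitter
  const countToday = todaysPosts.filter((p) => p.platform === platform).length;
  if (platform === 'twitter' && countToday >= 3) return false;

  return true;
}
```

Rules like "repost evergreen content every 90 days" follow the same pattern: a date comparison against a stored last-posted timestamp.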
The Content Repurposing Pipeline: Overview
Here is the complete pipeline I built:
1. Content Detection — Monitor blog RSS feed, CMS webhooks, or manual triggers for new content
2. Content Extraction — Pull the full article text and extract key points, quotes, and takeaways
3. Post Generation — Use AI to generate platform-specific social posts (Twitter threads, LinkedIn posts, Instagram captions)
4. Review Queue (Optional) — Stage posts for human review before publishing
5. Scheduled Publishing — Post to each platform at optimal times with proper formatting
Let me break down each stage.
Stage 1: Content Detection
The pipeline starts with a trigger that detects new content. I use three different trigger methods depending on the content source:
RSS Feed Trigger. For blog posts published on WordPress, Ghost, or any platform with an RSS feed, I use the RSS Feed Read node set to check every hour. When a new post appears in the feed, the workflow fires. The RSS node gives me the post title, URL, publication date, and a summary.
Webhook Trigger. For content that comes from a headless CMS like Strapi or Contentful, I use a Webhook node that receives a payload whenever a new post is published. This is more immediate than RSS polling.
Manual Trigger. Sometimes I want to repurpose content that is not from my blog — a conference talk, a podcast episode, or a product announcement. For these, I use a Manual Trigger node and paste the content into an input form.
All three triggers converge into a single processing pipeline through a Merge node.
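Because the three triggers emit differently shaped payloads, a small normalization step right after the Merge node keeps the rest of the pipeline simple. A sketch, assuming illustrative field names rather than n8n's exact output schemas:

```javascript
// Map each trigger's payload into one common shape so downstream nodes
// never need to know which trigger fired. Field names are assumptions.
function normalizeContent(item) {
  if (item.source === 'rss') {
    return { title: item.title, url: item.link, text: item.contentSnippet || '', source: 'rss' };
  }
  if (item.source === 'webhook') {
    // Headless CMS webhook: payload arrives under `body`
    return { title: item.body.title, url: item.body.url, text: item.body.content, source: 'webhook' };
  }
  // Manual trigger: content pasted into a form, possibly with no URL
  return { title: item.title, url: item.url || null, text: item.content, source: 'manual' };
}
```

From here on, every node can rely on `title`, `url`, `text`, and `source` being present.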
Stage 2: Content Extraction
Once the workflow has the source content, the next step is extracting the key information that will feed the social post generation.
For blog posts detected via RSS, the feed only gives me a summary. I need the full article text. I use the HTTP Request node to fetch the blog post URL, and then an HTML Extract node to pull the article body, stripping out navigation, headers, footers, and ads. The result is clean text of the full article.
Next, I use a Function node to perform basic text analysis: extracting the article’s headline, subheadings, any statistics or numbers mentioned, pull quotes, and the conclusion. These elements are the raw material for generating social posts.
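The analysis in that Function node can be sketched like this. The heuristics (a first-line headline, regexes for numbers and quoted sentences) are simplifications of whatever parsing you settle on:

```javascript
// Basic text analysis over the cleaned article text. Heuristic, not exact:
// statistics are detected by a number regex, pull quotes by quoted spans.
function analyzeArticle(text) {
  const lines = text.split('\n').map((l) => l.trim()).filter(Boolean);
  const headline = lines[0] || '';
  // Numbers and percentages often make strong social hooks
  const stats = text.match(/\b\d[\d,.]*%?/g) || [];
  // Quoted sentences of reasonable length are candidate pull quotes
  const quotes = text.match(/"[^"]{20,200}"/g) || [];
  const conclusion = lines[lines.length - 1] || '';
  return { headline, stats, quotes, conclusion };
}
```

Subheading extraction works the same way if you keep the article's HTML: grab the `h2`/`h3` text before stripping tags.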
For longer articles, I also use an AI node (the OpenAI node with GPT-4 or the HTTP Request node calling the Claude API) to generate a concise summary of the article’s key takeaways. I prompt the model to identify the three most important points, any counterintuitive insights, and the primary call to action.
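If you take the HTTP Request route for Claude, the call looks roughly like this (shown as JavaScript for clarity; the model name and token limit are placeholders you would tune):

```javascript
// Build the summarization prompt described above, then send it to the
// Anthropic Messages API. Model name and max_tokens are illustrative.
function buildSummaryPrompt(articleText) {
  return [
    'Summarize the article below for social media repurposing.',
    'Identify: (1) the three most important points,',
    '(2) any counterintuitive insights, and (3) the primary call to action.',
    '',
    articleText,
  ].join('\n');
}

async function summarizeArticle(articleText, apiKey) {
  const res = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 500,
      messages: [{ role: 'user', content: buildSummaryPrompt(articleText) }],
    }),
  });
  const data = await res.json();
  return data.content[0].text;
}
```

In n8n itself you would put the same URL, headers, and JSON body into an HTTP Request node, with the API key stored as a credential rather than inline.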
Getting Started with n8n
If you want to follow along and build this pipeline yourself, try n8n here. The cloud version is the easiest way to get started — you can connect your social media accounts, set up your first workflow, and start automating within minutes.
Stage 3: AI-Powered Post Generation
This is where the magic happens. I use AI to generate platform-specific social media posts from the extracted content. The key insight is that each platform has different norms, character limits, and engagement patterns, so you need tailored content for each one.
Twitter Posts. I generate two types of Twitter content:
First, a single tweet that captures the article’s main insight in under 280 characters, includes 1-2 relevant hashtags, and ends with a link to the full post. I prompt the AI with specific instructions: “Write a tweet that is provocative or surprising, uses conversational language, and makes people want to click the link. Do not use generic phrases like ‘check out my new post.’ Start with a bold statement or statistic.”
Second, a Twitter thread (3-5 tweets) that breaks down the article’s key points. The first tweet in the thread hooks the reader with the most compelling insight. Subsequent tweets deliver the supporting points. The final tweet includes the link and a call to action. I prompt the AI to number the tweets and keep each one under 280 characters.
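Whatever model you use, it is worth validating the output before it reaches the scheduler, because LLMs occasionally overrun limits. A minimal check for the thread rules above (a sketch I would add, not a built-in n8n feature):

```javascript
// Verify the AI-generated thread: 3-5 tweets, each within Twitter's
// 280-character limit. Returns a list of problems instead of throwing,
// so the workflow can route failures back for regeneration.
function validateThread(tweets) {
  const errors = [];
  if (tweets.length < 3 || tweets.length > 5) {
    errors.push(`thread has ${tweets.length} tweets, expected 3-5`);
  }
  tweets.forEach((t, i) => {
    if (t.length > 280) {
      errors.push(`tweet ${i + 1} is ${t.length} chars (max 280)`);
    }
  });
  return { ok: errors.length === 0, errors };
}
```

An IF node downstream can then branch: publish when `ok` is true, otherwise loop back to the generation step.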
LinkedIn Posts. LinkedIn rewards longer-form content and professional insights. I generate a LinkedIn post of 150-300 words that tells a mini-story: a problem the reader relates to, the insight from the article that addresses it, and a question or call to action that encourages comments. I instruct the AI to use line breaks for readability (LinkedIn’s algorithm rewards posts that keep people reading), avoid hashtags in the body (they look spammy on LinkedIn), and add 3-5 relevant hashtags at the end.
Instagram Captions. For Instagram, the post needs to work without a link (since Instagram does not support clickable links in captions). I generate a caption of 100-200 words that provides standalone value, uses relevant emojis sparingly, and includes 10-15 hashtags. I also generate alt text for accessibility. For the image itself, I use the HTTP Request node to call an image generation API (like DALL-E or a Canva template API) that creates a branded graphic with the article’s headline overlaid on a template.
All generated content is stored in a Set node that creates a structured object with properties for each platform’s post text, hashtags, image URLs, and scheduled posting time.
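The object that comes out of that Set node might look like this; every field name and value here is illustrative, so adapt the shape to your own workflow:

```javascript
// Illustrative shape of the structured object holding all generated posts.
const generatedPosts = {
  sourceUrl: 'https://example.com/blog/my-post',
  twitter: {
    single: 'Bold opening claim, under 280 chars. https://example.com/blog/my-post',
    thread: ['1/ Hook tweet', '2/ Supporting point', '3/ Link and call to action'],
    hashtags: ['#automation', '#n8n'],
    scheduledAt: '2024-01-08T14:00:00Z',
  },
  linkedin: {
    text: 'Mini-story post with line breaks for readability...',
    hashtags: ['#startups', '#contentmarketing', '#automation'],
    scheduledAt: '2024-01-09T08:00:00Z',
  },
  instagram: {
    caption: 'Standalone-value caption with sparing emoji use...',
    hashtags: ['#n8n', '#nocode'], // 10-15 in practice
    imageUrl: 'https://example.com/generated/headline-card.png',
    altText: 'Branded graphic with the article headline',
    scheduledAt: '2024-01-10T18:00:00Z',
  },
};
```

Keeping everything in one object means a single item flows through the rest of the workflow, which makes the review and publishing stages much easier to wire up.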
Stage 4: Review Queue (Optional)
For some clients, I include a human review step before posts go live. The workflow sends all generated posts to a Google Sheet or Notion database where a team member can review, edit, and approve them.
The review queue works like this: the workflow creates a row in Google Sheets for each generated post, with columns for the platform, the post text, the scheduled time, and a status column set to “Pending Review.” A team member reviews the posts, makes any edits, and changes the status to “Approved.”
A separate n8n workflow runs on a Schedule node every 30 minutes, checking the Google Sheet for newly approved posts. When it finds approved posts, it triggers the publishing step.
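The check that the 30-minute workflow performs reduces to a filter over the sheet rows. A sketch, using column names matching the queue described above (the row shape returned by the Google Sheets node will vary with your sheet):

```javascript
// Select rows that are both approved and due. Rows are assumed to be plain
// objects keyed by column name, as the Google Sheets node returns them.
function selectPostsToPublish(rows, now) {
  return rows.filter(
    (row) => row.status === 'Approved' && new Date(row.scheduledTime) <= now
  );
}
```

After publishing, the workflow writes the status back as "Published" so the same row is never picked up twice.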
For my own content, I skip this step and publish directly. After tuning the AI prompts over a few weeks, the generated posts are consistently good enough to go out without manual editing. But for clients who are sensitive about brand voice or operate in regulated industries, the review step is essential.
Stage 5: Scheduled Publishing
The final stage publishes posts to each platform at the optimal time.
Timing strategy. I do not post to all platforms simultaneously. Research and my own analytics show that different platforms have different peak engagement windows:
– Twitter: I post the main tweet within 2 hours of the blog post going live (to capitalize on real-time engagement), and the thread the next morning at 9 AM in the target audience’s time zone.
– LinkedIn: I post the next business day at 8 AM or 12 PM in the target audience’s time zone. LinkedIn engagement peaks during business hours.
– Instagram: I post 2-3 days after the blog post, at 6 PM in the target audience’s time zone. Instagram is more of an evening platform.
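The timing rules above can be sketched as a single lookup function. Time-zone handling is simplified here to the runtime's local time; a real Function node would convert to the target audience's zone first:

```javascript
// Compute the next posting time per platform from the blog post's
// publication time. Offsets and hours follow the timing rules above.
function nextPostTime(platform, publishedAt) {
  const d = new Date(publishedAt.getTime());
  switch (platform) {
    case 'twitter': // main tweet: within 2 hours of publication
      d.setHours(d.getHours() + 2);
      return d;
    case 'linkedin': // next business day at 8 AM
      do {
        d.setDate(d.getDate() + 1);
      } while (d.getDay() === 0 || d.getDay() === 6);
      d.setHours(8, 0, 0, 0);
      return d;
    case 'instagram': // two days later at 6 PM
      d.setDate(d.getDate() + 2);
      d.setHours(18, 0, 0, 0);
      return d;
    default:
      return d;
  }
}
```

A Wait node then sleeps until the computed timestamp before handing the post to the platform node.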
I implement this timing using Wait nodes combined with a Function node that calculates the next optimal posting time based on the current day and time. The Function node a