Introduction
Many Salesforce integrations fail not because teams lack tools, but because they repeat the same design mistakes again and again. These mistakes often work at small scale, pass initial testing, and then collapse in production under real load. In this article, we explain the most common Salesforce integration anti-patterns in plain language. Each anti-pattern includes a real-world example, what teams usually notice first, why it feels confusing, and what a better approach looks like.
Anti-Pattern 1: Treating Salesforce Like a Real-Time Database
This anti-pattern happens when systems read and write to Salesforce for every small action.
Real-world example
Using Salesforce like a live database is like calling a customer service agent every time you need one small detail instead of checking a local note. It works for a few calls, but not all day.
What teams usually notice
Slow user experience during peak hours
Random timeouts and API failures
API limits hit earlier than expected
Why it feels confusing
The same code works fine during testing, but fails only when many users are active.
Better approach
Use Salesforce for system-of-record data, cache frequently used information, and move non-critical work to asynchronous flows.
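The caching side of this approach can be sketched as a small time-to-live (TTL) cache in front of Salesforce reads. This is a minimal sketch: `fetch_account` is a hypothetical stand-in for a real Salesforce REST query, and the TTL value is illustrative.

```python
import time

class TTLCache:
    """Minimal TTL cache so repeated reads don't each hit Salesforce."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        value, expiry = self.store.get(key, (None, 0))
        if time.monotonic() < expiry:
            return value  # cache hit: no Salesforce call
        value = loader(key)  # cache miss: one Salesforce call
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

api_calls = 0

def fetch_account(account_id):
    # Placeholder for a real Salesforce query; counts calls for illustration.
    global api_calls
    api_calls += 1
    return {"Id": account_id, "Name": "Acme"}

cache = TTLCache(ttl_seconds=60)
for _ in range(100):
    cache.get("001xx0000001", fetch_account)
# 100 reads, but only the first one reached "Salesforce"
```

The same idea applies whether the cache lives in process memory, Redis, or a local replica; the point is that hot reads stop consuming API calls.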
Anti-Pattern 2: Polling Salesforce Instead of Listening for Events
Polling means repeatedly checking Salesforce for changes.
Real-world example
This is like checking your email inbox every minute instead of letting notifications tell you when a new message arrives.
What teams usually notice
High API usage even when data rarely changes
Nightly jobs taking longer every month
Rising infrastructure and licensing costs
Better approach
Use Platform Events or Change Data Capture so Salesforce notifies systems only when something actually changes.
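A toy calculation makes the cost difference concrete. The numbers here are illustrative (one poll per minute, three actual changes in an hour); real event delivery would come from Platform Events or Change Data Capture subscriptions.

```python
# Toy comparison: API cost of polling vs event-driven notification
# over one hour in which the data changes only 3 times.
POLL_INTERVAL_SECONDS = 60
HOUR_SECONDS = 3600
actual_changes = 3

polling_api_calls = HOUR_SECONDS // POLL_INTERVAL_SECONDS  # one query per poll
event_notifications = actual_changes  # one push per real change
wasted_polls = polling_api_calls - event_notifications  # calls that found nothing
```

With polling, 57 of 60 calls in this scenario return nothing new, and that waste is paid every hour whether or not anything changed.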
Anti-Pattern 3: No Clear Data Ownership
This happens when multiple systems update the same Salesforce records freely.
What teams usually notice
Fields changing unexpectedly
Data reverting to older values
Teams blaming each other for bad data
Why this breaks systems
Without ownership rules, updates overwrite each other and retries create conflicts.
Better approach
Define one owner per object or field. Other systems should treat the data as read-only or derived.
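One way to enforce ownership is a simple write guard consulted before any update is applied. This is a sketch: the ownership map, system names, and field names are all illustrative, not a prescribed schema.

```python
# Illustrative ownership map: exactly one system may write each field.
FIELD_OWNER = {
    "Account.BillingAddress": "erp",
    "Account.MarketingSegment": "marketing_platform",
}

def apply_update(system, field, value, record):
    """Apply an update only if `system` owns `field`."""
    owner = FIELD_OWNER.get(field)
    if owner is not None and owner != system:
        raise PermissionError(f"{system} may not write {field}; owner is {owner}")
    record[field] = value
    return record

record = {}
apply_update("erp", "Account.BillingAddress", "1 Main St", record)

try:
    # The marketing platform tries to write a field the ERP owns.
    apply_update("marketing_platform", "Account.BillingAddress", "other", record)
    blocked = False
except PermissionError:
    blocked = True  # the guard rejected the conflicting write
```

In practice the same rule can live in middleware, in integration-user permissions, or in validation logic; what matters is that it is enforced somewhere, not just documented.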
Anti-Pattern 4: Blind Retries Without Backoff
This anti-pattern happens when retries are added to recover from failures but are implemented without safeguards: every error is retried, immediately, by every caller at once.
Real-world example
It’s like pressing an elevator button repeatedly when it’s already overloaded.
What teams usually notice
Failures increase instead of decrease
API usage spikes during incidents
Systems collapse under retry storms
Better approach
Retry only recoverable errors, use exponential backoff with jitter, and add circuit breakers.
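These three rules can be sketched in a few lines of Python. The error codes, attempt count, and delay parameters below are illustrative, and the `sleep` function is injectable so the behavior is testable.

```python
import random
import time

# Illustrative set of errors worth retrying; anything else fails fast.
RECOVERABLE = {"RATE_LIMIT", "TIMEOUT", "SERVER_ERROR"}

def backoff_delay(attempt, base=1.0, cap=60.0):
    # "Full jitter": a random delay in [0, min(cap, base * 2**attempt)],
    # so simultaneous retriers spread out instead of stampeding together.
    return random.uniform(0, min(cap, base * 2 ** attempt))

def call_with_retries(operation, max_attempts=5, sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return operation()
        except RuntimeError as err:
            if str(err) not in RECOVERABLE or attempt == max_attempts - 1:
                raise  # non-recoverable, or out of attempts: fail fast
            sleep(backoff_delay(attempt))

# Demo: an operation that fails twice with a recoverable error, then succeeds.
attempts = []

def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("TIMEOUT")
    return "ok"

result = call_with_retries(flaky, sleep=lambda s: None)
```

A circuit breaker would sit one level above this: after repeated failures it stops calling Salesforce entirely for a cool-down period instead of letting every request run its own retry loop.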
Anti-Pattern 5: Ignoring Partial Failures
Some records succeed while others fail, especially with bulk operations.
What teams usually notice
Reports don’t match expectations
Missing or duplicated records
Problems discovered weeks later
Why this is dangerous
Partial failures look like success at first but silently corrupt data.
Better approach
Always process result files, retry only failed records, and log failures clearly.
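Per-record result processing can be sketched as follows. The result rows and error codes here are illustrative, not the exact shape of a Bulk API response.

```python
# Illustrative per-record results from a bulk job: the job "succeeded",
# but two individual records did not.
results = [
    {"id": "001A", "success": True,  "error": None},
    {"id": "001B", "success": False, "error": "DUPLICATE_VALUE"},
    {"id": "001C", "success": True,  "error": None},
    {"id": "001D", "success": False, "error": "FIELD_VALIDATION"},
]

failed = [r for r in results if not r["success"]]
for r in failed:
    # Log each failure with enough context to investigate or replay it.
    print(f"record {r['id']} failed: {r['error']}")

# Retry only the records that failed, never the whole batch.
retry_queue = [r["id"] for r in failed]
```

The key habit is treating "job completed" and "every record succeeded" as two different questions, and answering the second one explicitly.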
Anti-Pattern 6: Hard-Coding Salesforce Schema Assumptions
This happens when integrations assume fields will never change.
Real-world example
It’s like building a house assuming the road name will never change.
What teams usually notice
Integrations breaking suddenly after an admin renames or removes a field
Mapping errors appearing right after a release or org change
Failures that are hard to trace back to the schema change that caused them
Better approach
Treat schema as a contract, design backward-compatible changes, and test integrations after schema updates.
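One lightweight guard is to compare the fields an integration requires against the org's actual schema before syncing. This is a sketch: `describe_fields` is a stand-in for a real metadata "describe" call, and the field names are illustrative.

```python
# Fields this (hypothetical) integration requires per object.
REQUIRED_FIELDS = {"Account": {"Name", "Industry", "AnnualRevenue"}}

def describe_fields(object_name):
    # Placeholder: a real integration would query Salesforce metadata here.
    # In this example the org no longer has AnnualRevenue.
    return {"Account": {"Name", "Industry", "Phone"}}[object_name]

def missing_fields(object_name):
    """Return required fields the org's schema no longer provides."""
    return REQUIRED_FIELDS[object_name] - describe_fields(object_name)

gaps = missing_fields("Account")  # fail loudly at startup, not mid-sync
```

Running a check like this at deploy time or startup turns a silent mid-sync failure into an explicit, early error message.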
Anti-Pattern 7: No Observability or Alerts
Integrations run quietly until something breaks.
What teams usually notice
Issues found by business users, not monitoring
Long incident resolution times
Finger-pointing between teams
Better approach
Monitor API usage, error rates, latency, retries, and data drift. Alerts should trigger before users complain.
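The error-rate piece of this can be sketched as a sliding-window monitor. The window size and alert threshold below are illustrative; real deployments would feed the same signal into whatever alerting system the team already runs.

```python
from collections import deque

class ErrorRateMonitor:
    """Track success/failure over a sliding window and flag when the
    error rate crosses a threshold, before users start complaining."""

    def __init__(self, window=100, threshold=0.05):
        self.outcomes = deque(maxlen=window)  # True = success, False = failure
        self.threshold = threshold

    def record(self, ok):
        self.outcomes.append(ok)

    def error_rate(self):
        if not self.outcomes:
            return 0.0
        return self.outcomes.count(False) / len(self.outcomes)

    def should_alert(self):
        return self.error_rate() > self.threshold

monitor = ErrorRateMonitor(window=100, threshold=0.05)
for _ in range(90):
    monitor.record(True)
for _ in range(10):
    monitor.record(False)  # 10% errors: above the 5% threshold
```

The same pattern applies to latency, retry counts, and API-limit consumption: measure continuously, alert on thresholds, and tune the thresholds so alerts fire before business users notice.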
Anti-Pattern 8: Deploying Without Rollback Plans
Changes go live with no safety net.
Real-world example
It’s like updating accounting formulas without keeping a backup copy.
What teams usually notice
Panic during incidents
Manual data fixes
Long downtime
Better approach
Version releases, use feature flags, and practice rollback and recovery regularly.
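A feature flag turns rollback into a configuration change instead of a redeploy. A minimal sketch, with illustrative flag and pipeline names:

```python
# Illustrative flag store; in production this would be external config
# that can be flipped without redeploying code.
FLAGS = {"use_new_sync_pipeline": False}

def sync_record(record):
    if FLAGS["use_new_sync_pipeline"]:
        return ("v2", record)  # new code path
    return ("v1", record)      # known-good code path

before = sync_record({"Id": "001A"})

FLAGS["use_new_sync_pipeline"] = True   # roll forward
after = sync_record({"Id": "001A"})

FLAGS["use_new_sync_pipeline"] = False  # instant rollback, no deploy
rolled_back = sync_record({"Id": "001A"})
```

The flag only helps if the old path keeps working, so keep it deployed and tested until the new path has proven itself in production.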
Anti-Pattern 9: Mixing Real-Time and Batch Logic Carelessly
Real-time and batch jobs affect each other.
What teams usually notice
Real-time requests slowing down or timing out while nightly batch jobs run
Record-locking errors during bulk loads
API limits consumed by batch jobs, starving real-time flows
Better approach
Separate real-time and batch workloads, schedule heavy jobs off-peak, and protect real-time flows.
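One simple protection is a scheduling guard that keeps heavy batch jobs inside an off-peak window. A sketch, with an illustrative 10pm to 5am window that wraps past midnight:

```python
from datetime import time as t

# Illustrative off-peak window for heavy batch work: 22:00 to 05:00.
BATCH_WINDOW = (t(22, 0), t(5, 0))

def batch_allowed(now):
    """True if `now` (a datetime.time) falls in the off-peak window.
    The window wraps past midnight, hence the `or`."""
    start, end = BATCH_WINDOW
    return now >= start or now < end

# A batch job checks the guard before starting (or before each chunk).
midday_run = batch_allowed(t(14, 0))   # blocked: business hours
night_run = batch_allowed(t(23, 0))    # allowed: off-peak
```

Checking the guard per chunk, not just at job start, also lets a long-running job pause itself when the window closes.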
Anti-Pattern 10: Assuming Salesforce Will Always Be Available
Salesforce is reliable, but not perfect.
What teams usually notice
Integrations crashing or dropping data during maintenance windows and outages
No automatic recovery after Salesforce comes back online
Manual replays and data backfills after every incident
Better approach
Design for failure: queue requests, pause safely, and recover gracefully after outages.
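The queue-and-recover idea can be sketched as a small outbound buffer. This is a simplified sketch: `send` stands in for a real Salesforce call, and availability detection is reduced to a flag passed by the caller.

```python
from collections import deque

class OutboundBuffer:
    """Park outbound work while Salesforce is down; drain it on recovery."""

    def __init__(self, send):
        self.send = send       # stand-in for a real Salesforce call
        self.queue = deque()

    def submit(self, payload, available):
        if not available:
            self.queue.append(payload)  # outage: park the work, don't fail
            return False
        self.drain()                    # recovered: flush backlog first...
        self.send(payload)              # ...then send the new payload
        return True

    def drain(self):
        while self.queue:
            self.send(self.queue.popleft())

sent = []
buffer = OutboundBuffer(send=sent.append)
buffer.submit({"Id": "001A"}, available=False)  # outage: queued
buffer.submit({"Id": "001B"}, available=False)  # outage: queued
buffer.submit({"Id": "001C"}, available=True)   # recovered: backlog + new send
```

A production version would add persistence (so queued work survives a restart), ordering and deduplication rules, and a real health check instead of a flag, but the shape is the same: fail into a queue, not onto the floor.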
Who Should Care About These Anti-Patterns
This topic is especially important for:
Developers building and maintaining Salesforce integrations
Architects designing data flows across multiple systems
Platform and operations teams running integrations in production
Engineering leads accountable for data quality and uptime
When These Anti-Patterns Become Dangerous
They become critical when:
Traffic grows
More systems integrate with Salesforce
Data accuracy becomes business-critical
Teams scale and ownership becomes unclear
Summary
Salesforce integration anti-patterns often appear harmless at first but cause serious production issues as systems scale. Treating Salesforce like a real-time database, polling instead of using events, unclear data ownership, blind retries, ignoring partial failures, hard-coded schemas, lack of observability, missing rollback plans, and careless workload mixing all lead to fragile integrations. By recognizing and avoiding these anti-patterns early, teams can build Salesforce integrations that are scalable, reliable, and easier to operate in real-world production environments.