Getting Started
Welcome to the BWS X SDK documentation! This guide will help you get started quickly.
What is BWS X SDK?
BWS X SDK is a Node.js library for interacting with X/Twitter. It prioritizes cost optimization by using Playwright-based web scraping for read operations, while keeping API support for write operations and fallback scenarios.
Key Features
- 💰 Cost-Optimized: 90%+ cost reduction vs official X API
- 🔄 Hybrid Mode: Automatic fallback between scraping and API
- 🔐 Multi-Account Support: Account rotation with cooldown management
- 🎯 KOL Identification: AI-powered influencer discovery
- 📊 Real-time Webhooks: HMAC-signed notifications
- 🌐 Proxy Support: Built-in support for Oxylabs and BrightData
Installation
Install the package using your preferred package manager:
npm install @blockchain-web-services/bws-x-sdk-node
yarn add @blockchain-web-services/bws-x-sdk-node
pnpm add @blockchain-web-services/bws-x-sdk-node

Quick Start
1. Setup Environment Variables
Create a .env file in your project root:
# Crawler Accounts (Primary - for cost optimization)
X_ACCOUNTS='[{
  "id": "account1",
  "username": "@myaccount",
  "cookies": {
    "auth_token": "your-auth-token-here",
    "ct0": "your-ct0-token-here"
  }
}]'
# Twitter API (For posting and fallback)
TWITTER_ACCOUNTS='[{
  "name": "main",
  "apiKey": "your-api-key",
  "apiSecret": "your-api-secret",
  "accessToken": "your-access-token",
  "accessSecret": "your-access-secret"
}]'
# Proxy (Required for production scraping)
PROXY_CONFIG='{"enabled":true,"provider":"oxylabs","username":"your-proxy-user","password":"your-proxy-pass"}'
# SDK Options
SDK_OPTIONS='{"mode":"hybrid","preferredMode":"crawler"}'

2. Initialize the Client
import { XTwitterClient } from '@blockchain-web-services/bws-x-sdk-node';
import 'dotenv/config';
// Initialize client (auto-loads from environment variables)
const client = new XTwitterClient();

3. Start Using the SDK
// Get a tweet
const tweet = await client.getTweet('1234567890123456789');
console.log(tweet.text);
console.log('Likes:', tweet.metrics.likeCount);
// Get user profile
const profile = await client.getProfile('vitalikbuterin');
console.log(profile.name);
console.log('Followers:', profile.metrics.followersCount);
// Search tweets
const results = await client.searchTweets('#bitcoin', { maxResults: 10 });
results.forEach(tweet => {
  console.log(`@${tweet.authorUsername}: ${tweet.text}`);
});
// Post a reply
const reply = await client.postReply(
  '1234567890123456789',
  'Great insight! 🚀'
);
console.log('Reply posted:', reply.id);

Getting Credentials
Crawler Account Cookies
To use web scraping, you need X/Twitter authentication cookies.
Option 1: Automated Extraction (Recommended) ⚡
Use our automated cookie extraction tool:
npx bws-x-setup-cookies

This interactive wizard will:
- ✅ Open X/Twitter in a real browser
- ✅ Wait for you to log in
- ✅ Automatically extract the required cookies (auth_token, ct0, guest_id)
- ✅ Validate that the cookies work
- ✅ Warn about expiration (~30 days)
- ✅ Support multiple accounts
- ✅ Save to your .env file
- ✅ Export to multiple formats (JSON, YAML, TypeScript)
Security features:
- Automated validation
- Expiration tracking
- No manual copy-paste errors
For detailed usage, see the Cookie Setup Guide.
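Once the wizard completes, the cookies land in the same X_ACCOUNTS format used in the environment setup above. A minimal sketch of one saved entry, assuming the extracted guest_id is stored alongside the other cookies:

# guest_id shown here is an assumption based on the cookies the wizard extracts
X_ACCOUNTS='[{
  "id": "account1",
  "username": "@myaccount",
  "cookies": {
    "auth_token": "extracted-auth-token",
    "ct0": "extracted-ct0-token",
    "guest_id": "extracted-guest-id"
  }
}]'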
Option 2: Manual Extraction
If the automated tool doesn't work, you can extract cookies manually:
- Log in to X/Twitter in your browser
- Open Developer Tools (F12 or right-click → Inspect)
- Go to the Application/Storage tab → Cookies → https://x.com
- Copy these cookies: auth_token and ct0
- Add them to your .env file (see the format above)
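Once the cookies are in place, a quick sanity check is to run one of the read calls from the Quick Start in crawler-only mode; if the cookies are invalid or expired, you should expect the call to fail. A minimal sketch reusing only calls shown above (run it as an ES module so top-level await works):

import { XTwitterClient } from '@blockchain-web-services/bws-x-sdk-node';
import 'dotenv/config';

// Crawler-only mode exercises the cookie-based scraping path directly
const client = new XTwitterClient({ mode: 'crawler' });

const profile = await client.getProfile('vitalikbuterin');
console.log('Cookies look good, fetched profile for:', profile.name);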
Twitter API Credentials
For API operations (posting, API fallback):
- Go to Twitter Developer Portal
- Create a new app or use an existing one
- Copy credentials:
- API Key (Consumer Key)
- API Secret (Consumer Secret)
- Access Token
- Access Secret
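With these values in TWITTER_ACCOUNTS, write operations such as postReply from the Quick Start go through the API. A minimal sketch in API-only mode, using only calls shown earlier:

import { XTwitterClient } from '@blockchain-web-services/bws-x-sdk-node';
import 'dotenv/config';

// API-only mode skips scraping and uses the Twitter API credentials from TWITTER_ACCOUNTS
const client = new XTwitterClient({ mode: 'api' });

const reply = await client.postReply('1234567890123456789', 'Thanks for sharing!');
console.log('Posted via API:', reply.id);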
Proxy Credentials
For production web scraping:
- Sign up at Oxylabs or BrightData
- Get credentials: username and password
- Add to your .env file
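These plug into the PROXY_CONFIG variable from the setup step. A BrightData entry might look like the following ("brightdata" as the provider value is an assumption; see the Configuration Guide for the exact values supported):

# provider value below is an assumption; check the Configuration Guide for supported values
PROXY_CONFIG='{"enabled":true,"provider":"brightdata","username":"your-proxy-user","password":"your-proxy-pass"}'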
Operating Modes
The SDK supports three operating modes:
Hybrid Mode (Recommended)
Uses scraping first, API as fallback:
const client = new XTwitterClient({
  mode: 'hybrid',
  preferredMode: 'crawler' // Try crawler first
});

Best for: Cost optimization with reliability
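Even in hybrid mode a call can still reject, for example when neither the crawler nor the API path can serve the request, so plain try/catch around the Quick Start calls is worth having. A minimal sketch:

try {
  const tweet = await client.getTweet('1234567890123456789');
  console.log(tweet.text);
} catch (err) {
  // The call rejected: e.g. both paths were unavailable or the tweet does not exist
  console.error('getTweet failed:', err instanceof Error ? err.message : err);
}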
Crawler-Only Mode
Uses only web scraping:
const client = new XTwitterClient({
  mode: 'crawler'
});

Best for: Maximum cost savings, read-only operations
API-Only Mode
Uses only Twitter API:
const client = new XTwitterClient({
  mode: 'api'
});

Best for: Simplicity, when cost is not a concern
Next Steps
- Configuration Guide - Learn about all configuration options
- API Reference - Explore all available methods
- Webhook Notifications - Set up monitoring
- Quick Start - See more code examples
Need Help?
- API Documentation - Detailed API reference
- npm Package - View releases and downloads