
Free Python Library for Proxy API Integration

Lucas, Proxy Expert


Our Python library connects you to thousands of free proxies with zero setup time.

  • Global Network: 20+ countries, updated every minute
  • Lightning Fast: async support with connection pooling
  • Smart Validation: only working proxies reach your code
  • Zero Config: works out of the box with requests, httpx, and aiohttp

Quick Setup

Install the library and start using proxies in under a minute:

Installation

pip install free-proxy-server


Basic Usage

from free_proxy_server import ProxyClient

# Get proxies instantly
client = ProxyClient()
proxies = client.get_proxies()

print(f"Found {len(proxies)} working proxies")

First Request

import requests

proxy = proxies[0]
response = requests.get(
    "https://httpbin.org/ip", 
    proxies=proxy.proxy_dict
)

print(f"Your new IP: {response.json()['origin']}")

Smart Filtering

Filter proxies by country, protocol, speed, and more. Get exactly what you need:

from free_proxy_server import ProxyClient, ProxyFilter

# Get fast US HTTP proxies
filters = ProxyFilter(
    country="US",           # USA only
    protocol="http",        # HTTP protocol
    max_timeout=500,        # Under 500ms
    working_only=True,      # Skip broken ones
    limit=20               # Top 20 fastest
)

fast_proxies = client.get_proxies(filters)
print(f"Found {len(fast_proxies)} fast proxies")

Available Filters

  • Countries: US, GB, DE, FR, NL, CA, BR, SE, PL, AU, IT, SG and more
  • Protocols: HTTP, HTTPS, SOCKS4, SOCKS5
  • Speed: Filter by timeout (100ms to 5000ms)
  • Status: Working only, or include all
  • Limit: cap how many proxies are returned (a combined example follows this list)
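
Filters can be combined freely. Here is a short sketch using a different combination; the values "DE" and "https" are simply example inputs for the documented parameters:

from free_proxy_server import ProxyClient, ProxyFilter

client = ProxyClient()

# German HTTPS proxies, capped at 10 results
de_https = client.get_proxies(ProxyFilter(country="DE", protocol="https", limit=10))

# Any working proxy that answers in under 300 ms
fast_any = client.get_proxies(ProxyFilter(max_timeout=300, working_only=True))

print(f"{len(de_https)} German HTTPS proxies, {len(fast_any)} fast proxies overall")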

Library Integrations

Works seamlessly with all popular HTTP libraries:

Requests

import requests

proxy = client.get_proxies()[0]
response = requests.get(
    "https://api.github.com/users/octocat", 
    proxies=proxy.proxy_dict,
    timeout=10
)

user_data = response.json()
print(f"GitHub user: {user_data['name']}")

HTTPX

import httpx

proxy = client.get_proxies()[0]
# httpx 0.26+ takes a single proxy= argument (older releases used proxies=)
with httpx.Client(proxy=proxy.url) as http_client:
    response = http_client.get("https://httpbin.org/ip")
    print(f"Your IP via proxy: {response.json()['origin']}")

aiohttp (Async)

import asyncio
import aiohttp
from free_proxy_server import AsyncProxyClient

async def fetch_via_proxy():
    # Fetch proxies with the async client, then request through the first one
    async with AsyncProxyClient() as client:
        proxies = await client.get_proxies()

    async with aiohttp.ClientSession() as session:
        async with session.get(
            "https://httpbin.org/json",
            proxy=proxies[0].url
        ) as response:
            data = await response.json()
            print(f"Response: {data}")

asyncio.run(fetch_via_proxy())

Async Support

Handle multiple requests concurrently for better performance:

import asyncio
from free_proxy_server import AsyncProxyClient, ProxyFilter

async def get_multiple_country_proxies():
    async with AsyncProxyClient() as client:
        # Get US proxies
        us_filters = ProxyFilter(country="US", protocol="http")
        us_proxies = await client.get_proxies(us_filters)
        
        # Get proxies from multiple countries simultaneously
        country_proxies = await client.get_multiple_countries([
            "US", "GB", "DE", "FR", "CA"
        ])
        
        total_proxies = sum(len(proxies) for proxies in country_proxies)
        print(f"Got {total_proxies} proxies from 5 countries")
        
        return country_proxies

# Run it
proxies = asyncio.run(get_multiple_country_proxies())

Performance note: because most of the time is spent waiting on the network, issuing requests concurrently through the async client can be up to 10x faster than running them one at a time.
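
The same pattern extends to the requests themselves. Here is a minimal sketch, assuming the proxy objects expose the .url attribute used in the aiohttp example above, that fans several requests out over different proxies with asyncio.gather:

import asyncio
import aiohttp

async def fetch(session, url, proxy_url):
    # One GET through a single proxy; swallow failures and return None
    try:
        async with session.get(
            url, proxy=proxy_url, timeout=aiohttp.ClientTimeout(total=10)
        ) as resp:
            return await resp.text()
    except Exception:
        return None

async def fetch_all(urls, proxies):
    # Spread the URLs over the proxies round-robin and run everything concurrently
    async with aiohttp.ClientSession() as session:
        tasks = [
            fetch(session, url, proxies[i % len(proxies)].url)
            for i, url in enumerate(urls)
        ]
        return await asyncio.gather(*tasks)

# Example: results = asyncio.run(fetch_all(["https://httpbin.org/ip"] * 5, us_proxies))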

Proxy Validation

Test proxies before using them to avoid failures:

import asyncio

from free_proxy_server import ProxyValidator

# Test proxies before using them
validator = ProxyValidator(
    timeout=10,                           # 10 second timeout
    test_url="https://httpbin.org/ip"    # Test endpoint
)

# Synchronous validation
raw_proxies = client.get_proxies(limit=50)
working_proxies = validator.validate_proxies(raw_proxies)

print(f"{len(working_proxies)}/{len(raw_proxies)} proxies work")

# Asynchronous validation (faster)
async def validate_fast():
    working = await validator.validate_proxies_async(
        raw_proxies,
        max_concurrent=20  # Test 20 at once
    )
    return working

working_proxies = asyncio.run(validate_fast())

Tip: Always validate proxies before heavy scraping to save time and prevent errors.

Proxy Rotation

Avoid IP bans with automatic proxy rotation:

import requests

from free_proxy_server import ProxyRotator

# Create rotator with your proxy list
rotator = ProxyRotator(working_proxies)

# Scrape multiple pages with rotation
for i in range(100):
    # Get next proxy in rotation
    proxy = rotator.get_next()
    
    try:
        response = requests.get(
            f"https://example.com/page/{i}",
            proxies=proxy.proxy_dict,
            timeout=10
        )
        
        if response.status_code == 200:
            print(f"Page {i} scraped successfully")
        else:
            # Drop the bad proxy; the next iteration rotates to a fresh one
            rotator.remove_proxy(proxy)
            
    except Exception as e:
        print(f"Error with proxy {proxy}: {e}")
        rotator.remove_proxy(proxy)

Rotation Methods

  • get_next() - Sequential rotation
  • get_random() - Random proxy selection
  • remove_proxy() - Remove failed proxies
  • add_proxy() - Add new working proxies
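
The last two methods cover random selection and growing the pool. A short sketch using the client API documented above:

from free_proxy_server import ProxyClient, ProxyRotator

client = ProxyClient()
rotator = ProxyRotator(client.get_proxies())

# Pick a proxy at random instead of rotating sequentially
proxy = rotator.get_random()
print(f"Randomly selected proxy: {proxy}")

# Top the pool up with freshly fetched proxies
for fresh in client.get_proxies():
    rotator.add_proxy(fresh)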

Error Handling

Handle failures gracefully with proper exception handling:

import requests

from free_proxy_server import (
    ProxyClient, 
    ProxyAPIError, 
    ProxyTimeoutError,
    ProxyValidationError
)

def robust_scraping():
    try:
        client = ProxyClient()
        proxies = client.get_proxies()
        
        for proxy in proxies:
            try:
                response = requests.get(
                    "https://example.com",
                    proxies=proxy.proxy_dict,
                    timeout=10
                )
                return response  # Success
                
            except requests.RequestException:
                continue  # Try next proxy
                
    except ProxyAPIError as e:
        print(f"API Error: {e.message} (Status: {e.status_code})")
        return None
        
    except ProxyTimeoutError as e:
        print(f"Timeout: {e}")
        return None
        
    except ProxyValidationError as e:
        print(f"Validation failed: {e}")
        return None

result = robust_scraping()

Export Formats

Export proxy data in multiple formats for different use cases:

from free_proxy_server import ProxyFormatter

# Simple address:port list
simple_list = ProxyFormatter.to_simple_list(proxies)
# ['1.2.3.4:8080', '5.6.7.8:3128', ...]

# Curl commands for testing
curl_commands = ProxyFormatter.to_curl_format(proxies)
# ['curl -x 1.2.3.4:8080', 'curl -x 5.6.7.8:3128', ...]

# CSV format for analysis
csv_data = ProxyFormatter.to_csv(proxies, include_headers=True)
# 'address,port,protocol,country,timeout_ms\n1.2.3.4,8080,http,US,450\n...'

# Requests library format
request_format = ProxyFormatter.to_requests_format(proxies)
# [{'http': 'http://1.2.3.4:8080', 'https': 'http://1.2.3.4:8080'}, ...]
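
To persist any of these exports, write them to disk. A small sketch, assuming to_simple_list returns plain strings and to_csv returns a single CSV string, as the comments above suggest:

from pathlib import Path

# One address:port per line, handy for other tools
Path("proxies.txt").write_text("\n".join(simple_list))

# The CSV export opens directly in a spreadsheet or pandas
Path("proxies.csv").write_text(csv_data)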

Real-World Example

Here's a complete web scraping example:

import requests
from free_proxy_server import ProxyClient, ProxyFilter
import time

def scrape_with_proxies(urls):
    client = ProxyClient()
    
    # Get fast US proxies
    filters = ProxyFilter(
        country="US",
        protocol="http",
        max_timeout=500,
        working_only=True
    )
    
    proxies = client.get_proxies(filters)
    results = []
    
    for i, url in enumerate(urls):
        proxy = proxies[i % len(proxies)]  # Rotate proxies
        
        try:
            response = requests.get(
                url, 
                proxies=proxy.proxy_dict,
                timeout=10
            )
            results.append(response.text)
            
        except Exception as e:
            print(f"Error with {proxy}: {e}")
            continue
            
        time.sleep(1)  # Be nice to servers
    
    return results

urls = ["http://example.com", "http://httpbin.org/ip"]
data = scrape_with_proxies(urls)

Performance Tips

Make your proxy usage faster:

  • Use async clients for multiple requests
  • Filter proxies by speed (max_timeout)
  • Validate proxies before heavy use
  • Rotate proxies to avoid blocks
  • Cache working proxies locally (see the caching sketch below)
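
The caching tip can be as simple as writing proxy URLs to a JSON file and reloading them on the next run. A minimal sketch, assuming proxy objects expose the .url attribute used earlier (the cache file name is arbitrary):

import json
from pathlib import Path

CACHE_FILE = Path("proxy_cache.json")

def save_proxy_urls(proxies):
    # Persist only the URLs; they are cheap to re-validate on the next run
    CACHE_FILE.write_text(json.dumps([p.url for p in proxies]))

def load_proxy_urls():
    # Return cached URLs, or an empty list if no cache exists yet
    if CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())
    return []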

Available Proxy Types

Our API provides different proxy types:

  • HTTP proxies - Standard web traffic
  • HTTPS proxies - Secure connections
  • SOCKS4/5 proxies - TCP traffic

Most web scraping uses HTTP proxies. Use SOCKS when you need to tunnel other TCP traffic, as in the sketch below.
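
For instance, requests can use a SOCKS5 proxy once the SOCKS extra is installed (pip install requests[socks]). A minimal sketch, assuming "socks5" is an accepted protocol value and that to_simple_list returns bare host:port strings as shown above:

import requests
from free_proxy_server import ProxyClient, ProxyFilter, ProxyFormatter

client = ProxyClient()
socks_proxies = client.get_proxies(ProxyFilter(protocol="socks5", working_only=True))

# Build a socks5:// mapping from the first host:port entry
address = ProxyFormatter.to_simple_list(socks_proxies)[0]
proxies = {"http": f"socks5://{address}", "https": f"socks5://{address}"}

response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json()["origin"])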

Countries Available

Get proxies from 20+ countries including:

  • United States (US)
  • Germany (DE)
  • United Kingdom (GB)
  • France (FR)
  • Netherlands (NL)
  • Canada (CA)

Free vs Premium

Our free API gives you working proxies at no cost. For production use, consider our premium datacenter and mobile proxies:

  • Higher speed and reliability
  • More locations
  • Better uptime
  • Dedicated support

Getting Started

Start using free proxies in your Python projects:

  1. Install: pip install free-proxy-server (PyPI)
  2. Import: from free_proxy_server import ProxyClient
  3. Get proxies: proxies = ProxyClient().get_proxies()
  4. Use in requests: requests.get(url, proxies=proxy.proxy_dict)

The library handles all API calls, filtering, and formatting. You focus on your scraping logic.

Need help? Check our API documentation or contact support. The library is open source and actively maintained.