
Welcome to Software Development on Codidact!

Will you help us build our independent community of developers helping developers? We're small and trying to grow. We welcome questions about all aspects of software development, from design to code to QA and more. Got questions? Got answers? Got code you'd like someone to review? Please join us.

How can I export a full Scryfall search to a CSV compatible with Moxfield collections?

+1
−0

I’m trying to export cards from a Scryfall search so I can import them into a Moxfield collection.

I’ve tried a simple script that dumps card names (Query Scryfall + dump card names for Moxfield import), but it only outputs the names, and Moxfield rejects the file when I try to import it.

According to Moxfield’s help page, the CSV must have a header row with these columns:

Count,Tradelist Count,Name,Edition,Condition,Language,Foil,Tags,Last Modified,Collector Number,Alter,Proxy,Purchase Price

I then tried fetching Scryfall’s CSV output directly with this Python script:

import requests

# Scryfall search query
QUERY = "f:standard f:penny usd<=1"
URL = "https://api.scryfall.com/cards/search"

# Fetch the CSV from Scryfall; passing params lets requests URL-encode the query
response = requests.get(URL, params={"q": QUERY, "format": "csv"})
response.raise_for_status()  # raise an error if the request fails

# Save to a file
with open("scryfall_output.csv", "w", encoding="utf-8") as f:
    f.write(response.text)

print("Saved Scryfall CSV to scryfall_output.csv")

This fetches cards, but only ones whose names start with A, B, or C, which looks like just the first page of results. The blog post about this feature (Scryfall API CSV format) suggests it should export the full search.

My questions:

  1. Do I need to handle pagination manually to get the full Scryfall search before converting it to the Moxfield CSV format?
  2. Are there simpler tools, scripts, or methods for exporting a complete Scryfall search into a CSV that matches Moxfield’s required header format?

I’m looking for a reliable workflow to go from a Scryfall search query to a fully importable Moxfield collection CSV.


2 answers

+3
−0

The documentation for /cards/search says

This method is paginated, returning 175 cards at a time. Review the documentation for paginating the List type and the Error type to understand all of the possible output from this method.

So yes, you would have to handle pagination yourself.

Test the has_more property, and if it is true, fetch the URL in next_page and append that page's data to what you already have.
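A minimal sketch of that loop. To keep it runnable offline, a stub dictionary stands in for requests.get(...).json(); the page shapes mirror Scryfall's documented data / has_more / next_page fields, but the page URLs and card names here are made up:

```python
def fetch_all(fetch_page, start_url):
    """Follow Scryfall-style pagination: request next_page while has_more is true."""
    url = start_url
    while url:
        page = fetch_page(url)
        yield from page.get("data", [])
        url = page["next_page"] if page.get("has_more") else None

# Offline demo: a dict lookup standing in for an HTTP request
pages = {
    "p1": {"data": [{"name": "A"}, {"name": "B"}], "has_more": True, "next_page": "p2"},
    "p2": {"data": [{"name": "C"}], "has_more": False},
}
names = [card["name"] for card in fetch_all(pages.get, "p1")]
print(names)  # ['A', 'B', 'C']
```

Against the real API, fetch_page would be something like lambda url: requests.get(url).json(), and start_url the full /cards/search URL with your query.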


+1
−0

This should produce a Moxfield-compatible CSV containing all cards from a Scryfall search:

import requests
import csv
import time

QUERY = "f:standard f:penny usd<=1"
BASE_URL = "https://api.scryfall.com/cards/search"
PARAMS = {
    "q": QUERY,
    "unique": "cards",
    "format": "json"
}

OUTPUT_FILE = "moxfield_import.csv"

FIELDNAMES = [
    "Count",
    "Tradelist Count",
    "Name",
    "Edition",
    "Condition",
    "Language",
    "Foil",
    "Tags",
    "Last Modified",
    "Collector Number",
    "Alter",
    "Proxy",
    "Purchase Price"
]

def fetch_all_cards():
    url = BASE_URL
    params = PARAMS.copy()
    while True:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        data = resp.json()
        for card in data.get("data", []):
            yield card
        if not data.get("has_more"):
            break
        url = data["next_page"]
        params = None
        time.sleep(0.2)

def write_cards_to_csv(filename):
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        writer.writeheader()
        for card in fetch_all_cards():
            row = {
                "Count": 1,
                "Tradelist Count": "",
                "Name": card.get("name"),
                "Edition": card.get("set"),
                "Condition": "",
                "Language": card.get("lang"),
                "Foil": "Yes" if card.get("foil") else "No",
                "Tags": "",
                "Last Modified": "",
                "Collector Number": card.get("collector_number"),
                "Alter": "",
                "Proxy": "",
                "Purchase Price": ""
            }
            writer.writerow(row)

if __name__ == "__main__":
    write_cards_to_csv(OUTPUT_FILE)
    print(f"Saved all cards to {OUTPUT_FILE}")