
Any CSVs/.txts available for the strategydex?

Hi there,

Like the title says, I'm looking for raw text to make a personal database with columns like: 'Pokemon', 'Type 1', 'Type 2', 'Tier', etc. Is this available anywhere? If not, do I have permission to scrape the strategydex for these fields?

The goal would be to conduct some sort of meta-analysis on the roster to give myself and others a deeper understanding of the metagame. I've been out of the loop for a bit - I missed gen 7 entirely. I plan on using Excel/R/Python, but any recommendations for analytical methods would be highly appreciated!

If anyone's done anything like this, please feel free to shoot some ideas or questions my way. I'm trying to better my understanding of statistics and data science through something I find fun.


P.S. I read through the other (sub)forums and this channel seemed like the best one for this kind of question. If it isn't, please let me know!
hmm, i think i may have made one.

i made a script that pulls data from smogon's usage stats, but it doesn't tell you what tier a mon is in (although you could infer it from usage). i never really finished where i was going with it, but you can have a look. basically just iterate over the lists at the top, or use your own inputs if you want. not sure if it's very good code, but it works. sorry if this isn't what you're looking for.

alternatively, you could scrape the html from the strategy dex, depending on how it loads, and build a json object out of the results. judging by the naming conventions in the strat dex that might be difficult, since the pages look like they're generated by react. but since the entries all share the same structure, you could get the count of the divs on the page and loop down each tree.
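if you go the scraping route, here's the rough shape i mean. this is only a sketch on a made-up sample page: the div/span layout and the field order are stand-ins for illustration, the real dex markup will differ, so adapt the parser to whatever html you actually get back.

    from html.parser import HTMLParser
    import json

    #toy sample standing in for a scraped dex page; one div per mon,
    #same structure repeated, which is what makes the loop approach work
    SAMPLE = """
    <div class="row"><span>Bulbasaur</span><span>Grass</span><span>Poison</span></div>
    <div class="row"><span>Charmander</span><span>Fire</span><span></span></div>
    """

    class RowParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.rows = []       #one list of cell texts per div
            self._current = None #cells of the div currently being read

        def handle_starttag(self, tag, attrs):
            if tag == 'div':
                self._current = []

        def handle_data(self, data):
            if self._current is not None and data.strip():
                self._current.append(data.strip())

        def handle_endtag(self, tag):
            if tag == 'div' and self._current is not None:
                self.rows.append(self._current)
                self._current = None

    p = RowParser()
    p.feed(SAMPLE)

    #turn each row into a record; missing type 2 becomes None
    records = [{'pokemon': r[0],
                'type1': r[1],
                'type2': r[2] if len(r) > 2 else None}
               for r in p.rows]
    print(json.dumps(records, indent=2))

stdlib html.parser is enough for a structure this regular; if the real page is messier, something like BeautifulSoup would save you pain.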

import os
import shutil

import pandas as pd
import requests

#categories of stats files available on smogon's stats server
trees = ['chaos','leads','mega','metagame','monotype','moveset']
suspect = False #True pulls the "suspecttest" stats, False the regular stats

class Contact_Smogon():

    def __init__(self,year,mm,gen,tier,rating):
        self.yyyy = str(year) #needs to be 4 digit year
        self.mm = str(mm).zfill(2) #needs to be 2 digit month, ie 03 for March
        self.gen = gen
        self.tier = tier
        self.rating = str(rating)

    #create temporary folder to store txt for parsing
    def _make_temp(self):
        self._temp_folder = os.path.join(os.getcwd(), 'temp_folder')
        os.makedirs(self._temp_folder, exist_ok=True)
        print("Temporary folder successfully created.")
        return self._temp_folder

    #remove temporary folder
    def _clear_temp(self):
        shutil.rmtree(self._temp_folder, ignore_errors=True)
        print("Temporary files have been removed from {}.".format(self._temp_folder))

    #fetch the stats for the tier of interest and parse them into a DataFrame
    def find_stats(self,urls):
        rating_list = ['0','1500','1630','1760'] #list of possible ratings
        if self.rating not in rating_list: #throw error if invalid rating
            raise ValueError("Invalid rating input. Rating value must be one of "
                             "[0, 1500, 1630, 1760]")

        frames = []
        for url in urls:
            #do the request and grab the text
            r = requests.get(url)
            r.raise_for_status()
            page = r.text

            #save each file under a unique name taken from the url
            src = url.split('/')
            page_path = os.path.join(self._make_temp(), src[-1])
            with open(page_path,"w") as f:
                f.write(page)

            #read the file back in, strip the table formatting, and build
            #the data structure
            with open(page_path) as fobj:
                data_list = self._remove_formatting(fobj)
            frames.append(self.create_data_structure(data_list=data_list))

        self._clear_temp()
        return pd.concat(frames, ignore_index=True)

    #Function to: create data structure
    def create_data_structure(self,data_list):
        #keys mirroring the headers in the raw text
        keys = ['rank','pokemon','usage_pct','raw_usage','raw_pct','real',
                'real_pct']

        #blank list to be filled with dicts
        dict_list = []

        #inserting the data into the list
        for row in data_list:
            split_data = row.split(',')
            dict_list.append(dict(zip(keys,split_data)))

        #make dataframe to save and use for plotting
        df = pd.DataFrame(dict_list)

        return df

    def _remove_formatting(self,page):
        #blank list to be filled and returned
        outlist = []

        #read lines of the file with .readlines() and truncate the first 5
        #lines of formatting
        list_of_lines = page.readlines()
        list_of_lines = list_of_lines[5:]

        #turn the pipe-delimited table rows into csv rows
        for line in list_of_lines:
            line = line.replace('|', ',')
            line = line.replace(' ', '')
            line = line.replace('%','')
            if line.startswith(','):
                line = line[1:]
            outlist.append(line.rstrip(',\n'))

        return outlist #return the list
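the script expects a list of urls. for what it's worth, the monthly stats files sit at predictable paths on smogon.com/stats, so you can build them with a little helper like this. the url pattern below is my assumption from browsing the stats directory (and generate_urls is a hypothetical helper, not part of the script above), so double-check that the month/tier/rating combo you want actually exists before pointing find_stats at it:

    #hypothetical helper: builds smogon usage-stats urls from year, month,
    #metagame names, and a rating cutoff. the path pattern
    #(smogon.com/stats/YYYY-MM/tier-rating.txt) is an assumption from
    #browsing the stats site, so verify it for your target month
    def generate_urls(yyyy, mm, metas, rating='1500'):
        base = 'https://www.smogon.com/stats/{}-{}/{}-{}.txt'
        return [base.format(yyyy, mm, meta, rating) for meta in metas]

    urls = generate_urls('2020', '01', ['gen8ou', 'gen8uu'])
    #urls[0] is 'https://www.smogon.com/stats/2020-01/gen8ou-1500.txt'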
