I am trying to manipulate the CSV file from https://www.kaggle.com/raymondsunartio/6000-nasdaq-stocks-historical-daily-prices using dask.dataframe. The original dataframe has the columns 'date', 'ticker', 'open', 'close', etc.
My goal is to create a new dataframe indexed by 'date', with one column per unique ticker holding that ticker's closing price.
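For example, with two hypothetical tickers AAPL and MSFT, the result should look roughly like this (values are just placeholders):

date        AAPL     MSFT
2020-01-02  <close>  <close>
2020-01-03  <close>  <close>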
The following code does the trick, but it is quite slow, taking almost a minute for N = 6. I suspect that dask re-reads the CSV file on every iteration of the for-loop, but I don't know how to avoid that. My initial guess is that df.groupby('ticker') would help somewhere, but I am not familiar enough with pandas.
import dask.dataframe as dd
import pandas as pd
from functools import reduce

def load_and_fix_csv(path: str, N: int, tickers: list = None) -> pd.DataFrame:
    raw = dd.read_csv(path, parse_dates=["date"])
    if tickers is None:
        tickers = raw.ticker.unique().compute()[:N]  # First N unique tickers
    dfs = []
    for tick in tickers:
        # One-ticker frame with only date and close, renamed so the
        # merged columns don't all collide on 'close'
        tmp = raw[raw.ticker == tick][["date", "close"]].rename(columns={"close": tick})
        dfs.append(tmp)
    # Outer-merge all per-ticker frames on date
    df = reduce(lambda x, y: dd.merge(x, y, how="outer", on="date"), dfs)
    df = df.set_index("date").compute()  # compute() returns a pandas DataFrame
    return df
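For what it's worth, the closest I can get to a single pass over the file is the sketch below, which filters to the N tickers once and pivots in pandas after a single compute(). It assumes the 'date'/'ticker'/'close' subset fits in memory, and load_wide is just a name I made up; I don't know if this is the idiomatic dask way, which is part of what I'm asking:

import dask.dataframe as dd

def load_wide(path: str, N: int):
    # Keep only the columns needed for the pivot
    raw = dd.read_csv(path, parse_dates=["date"])[["date", "ticker", "close"]]
    tickers = raw.ticker.unique().compute()[:N]  # First N unique tickers
    # Filter to those tickers in one pass, materialize once, then pivot in pandas
    subset = raw[raw.ticker.isin(list(tickers))].compute()
    return subset.pivot_table(index="date", columns="ticker", values="close")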
Any help is appreciated! Thank you.