Loading Data into Pandas: Tips and Tricks You May or May Not Know

dataground.io

49 points by spacejunkjim 3 years ago · 11 comments

Helmut10001 3 years ago

Not to knock the article, but the most important pandas parameters for me are `iterator=True` and `chunksize=x`, for streamed processing. Here's an example of processing a CSV file with 400 million latitude and longitude coordinates.[1]

[1]: https://ad.vgiscience.org/twitter-global-preview/00_Twitter_...
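A minimal sketch of the chunked pattern described above, with a small in-memory CSV standing in for the huge coordinates file (the column names and chunk size are illustrative):

```python
import io

import pandas as pd

# Small in-memory CSV standing in for a file too large to load at once.
csv_data = io.StringIO("lat,lng\n" + "\n".join(f"{i}.0,{i}.5" for i in range(10)))

# chunksize=... makes read_csv return an iterator of DataFrames,
# so only one chunk needs to be in memory at a time.
total_rows = 0
for chunk in pd.read_csv(csv_data, chunksize=4):
    total_rows += len(chunk)

print(total_rows)  # 10
```

Passing `iterator=True` instead gives a reader object you can pull from with `get_chunk(n)` when you want varying chunk sizes.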

nl 3 years ago

I find pd.read_sql pretty useful for integration with SQLite too.

https://pandas.pydata.org/docs/reference/api/pandas.read_sql...
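A minimal sketch of `pd.read_sql` against an in-memory SQLite database (the table and rows here are made up for illustration):

```python
import sqlite3

import pandas as pd

# In-memory SQLite database standing in for a real one.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "grace")])

# read_sql accepts either a table name or a full query, plus a connection.
df = pd.read_sql("SELECT id, name FROM users ORDER BY id", con)
print(df["name"].tolist())  # ['ada', 'grace']
```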

wenc 3 years ago

I've mostly replaced pd.read_csv and pd.read_parquet with duckdb.query("select * from 'x.csv'") or duckdb.query("select * from 'y/*.parquet'").

It’s much faster because DuckDB is vectorized. The result is a Pandas dataframe.

Querying the Pandas dataframe from DuckDB is faster than querying it with Pandas itself.

ashwal 3 years ago

Some good tips in here. I find myself reaching for the JSON/Excel methods often.

Despite using it for years, I still haven't decided whether pandas is poorly architected or whether the clunkiness (for lack of a better term) is a result of the inherent difficulty of the tasks.

  • hansvm 3 years ago

    I seem to remember the author writing a while back about how it was their first major project, and that there were a ton of things they'd learned and would like to change.

char101 3 years ago

Another tip: use https://github.com/sfu-db/connector-x to load a database query result into pandas without a memory copy, which makes it faster.

bushbaba 3 years ago

I wish I had known about json_normalize sooner. JSON is great, but needing to transform it into a CSV/Excel sheet is a chore. Great to know about this one-liner!!
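For anyone who hasn't seen it, a minimal sketch of what `json_normalize` does (the record here is made up): nested dicts become dotted column names in a flat DataFrame, ready to write out as CSV or Excel.

```python
import pandas as pd

record = {
    "id": 1,
    "user": {"name": "ada", "email": "ada@example.com"},
    "tags": ["a", "b"],
}

# Nested keys are flattened into dotted columns: user.name, user.email.
df = pd.json_normalize(record)
print(sorted(df.columns))  # ['id', 'tags', 'user.email', 'user.name']
```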

lettergram 3 years ago

I spent some time working on a Python library called DataProfiler.

https://github.com/capitalone/DataProfiler

The gist is that you can point to any common dataset and load it directly into pandas.

    from dataprofiler import Data

    data = Data("your_file.csv")  # auto-detects & loads CSV, AVRO, Parquet, JSON, text, URL

I simply hate dealing with loading data, so it's my go-to.

pineapplejuice 3 years ago

My tip is to keep a dict of all the fields and the data types you expect them to be, particularly strings. In my company we have IDs that start with zeros, or are a mix of numbers and letters, and they get interpreted as numeric types. I'm frequently pulling data out of the DW with the same fields, so I just pass my dict to the dtype= arg and it takes care of that for me.
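A minimal sketch of that pattern (the field names are illustrative): without the dtype mapping, an ID like "00123" would be read as the integer 123 and lose its leading zeros.

```python
import io

import pandas as pd

# IDs with leading zeros would be parsed as integers by default.
csv_data = io.StringIO("id,amount\n00123,9.5\n00456,3.0\n")

# One shared mapping, reused across every pull of the same fields.
dtypes = {"id": str, "amount": float}
df = pd.read_csv(csv_data, dtype=dtypes)
print(df["id"].tolist())  # ['00123', '00456']
```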

IntrepidWorm 3 years ago

In my experience, the best way to load data into pandas is https://www.atlassian.com/software/bamboo
