Today, support for files in XLS, XLSX and CSV formats has been added to the Diggernaut platform. It is implemented the same way as for the other supported file types: you load a file into the digger with the walk command, the digger fetches the file, determines its type and converts it to XML. Then you can traverse the DOM structure, extract the necessary data and build your dataset.
Let's see how it works with an example. For this purpose, we uploaded three files to our sandbox:
https://www.diggernaut.com/sandbox/sample.csv – CSV data file
https://www.diggernaut.com/sandbox/sample.xls – XLS data file (binary version)
https://www.diggernaut.com/sandbox/sample.xlsx – XLSX data file (XML version)
We’ll code a straightforward digger configuration that loads the file and shows us the source code of the converted data in debug mode.
```yaml
---
config:
    debug: 2
    agent: Firefox
do:
- walk:
    to: https://www.diggernaut.com/sandbox/sample.csv
    do:
```

If we run the digger in debug mode, we can see the following XML source page with the data in the log:
```xml
<sheet class="1">
<row class="1"><column class="1">First</column><column class="2">Last</column><column class="3">Pcode</column><column class="4">Political Party</column></row>
<row class="2"><column class="1">Smith</column><column class="2">Fred</column><column class="3">A</column><column class="4">Democratic</column></row>
<row class="3"><column class="1">Robbins</column><column class="2">Terry</column><column class="3">1</column><column class="4">Green</column></row>
<row class="4"><column class="1">O'Neill</column><column class="2">Susan</column><column class="3">B</column><column class="4">Republican</column></row>
<row class="5"><column class="1">Parker</column><column class="2">Scott</column><column class="3">D</column><column class="4">American Independent</column></row>
<row class="6"><column class="1">Perkins</column><column class="2">Ralph</column><column class="3">D</column><column class="4">American Independent</column></row>
<row class="7"><column class="1">Talbot</column><column class="2">Angie</column><column class="3">7</column><column class="4">Middle Class Pty</column></row>
</sheet>
```
Since a CSV file has only one sheet, there is a single sheet element in the resulting structure. In XLS / XLSX there can be many sheets, and each of them is kept in its own sheet element. This structure is quite easy to parse: go through the sheet elements, then through the row elements, and extract the data from the column elements. The values of the class attributes correspond to the row and column numbers in the original file.
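For illustration, here is a minimal sketch of a digger fragment that traverses such a structure and builds a dataset. It uses the sheet, row and column elements described above; the object name person and the field names are illustrative assumptions, not part of the sample files:

```yaml
# iterate over every row of every sheet
- find:
    path: sheet > row
    do:
    # each row becomes one record
    - object_new: person
    - find:
        path: column:nth-child(1)
        do:
        - parse
        - object_field_set:
            object: person
            field: first_name
    - find:
        path: column:nth-child(2)
        do:
        - parse
        - object_field_set:
            object: person
            field: last_name
    - find:
        path: column:nth-child(4)
        do:
        - parse
        - object_field_set:
            object: person
            field: party
    - object_save:
        name: person
```

Note that the first row of each sheet holds the column headers, so in a real digger you would want to skip it (for example, by checking the row's class value) before saving the object.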
Let’s now see how the XLS resource will be converted:
```yaml
---
config:
    debug: 2
    agent: Firefox
do:
- walk:
    to: https://www.diggernaut.com/sandbox/sample.xls
    do:
```

We get the following source code:
```xml
<sheet class="1">
<row class="1"><column class="1">First</column><column class="2">Last</column><column class="3">Pcode</column><column class="4">Political Party</column></row>
<row class="2"><column class="1">Smith</column><column class="2">Fred</column><column class="3">A</column></row>
<row class="3"><column class="1">Robbins</column><column class="2">Terry</column><column class="3">1</column></row>
<row class="4"><column class="1">O'Neill</column><column class="2">Susan</column><column class="3">B</column></row>
<row class="5"><column class="1">Parker</column><column class="2">Scott</column><column class="3">D</column></row>
<row class="6"><column class="1">Perkins</column><column class="2">Ralph</column><column class="3">D</column></row>
<row class="7"><column class="1">Talbot</column><column class="2">Angie</column><column class="3">7</column></row>
</sheet>
<sheet class="2">
<row class="1"><column class="1">PARTY CODE</column><column class="2">NAME</column></row>
<row class="2"><column class="1">1</column><column class="2">Green</column></row>
<row class="3"><column class="1">2</column><column class="2">Reform</column></row>
<row class="4"><column class="1">3</column><column class="2">Whig</column></row>
<row class="5"><column class="1">4</column><column class="2">Islamic Political Party of America</column></row>
<row class="6"><column class="1">5</column><column class="2">Rock &amp; Roll</column></row>
<row class="7"><column class="1">6</column><column class="2">Natural Law</column></row>
<row class="8"><column class="1">7</column><column class="2">Middle Class Pty</column></row>
<row class="9"><column class="1">8</column><column class="2">Humanist</column></row>
<row class="10"><column class="1">9</column><column class="2">Pragmatic</column></row>
<row class="11"><column class="1">10</column><column class="2">Conscious American African Party</column></row>
<row class="12"><column class="1">11</column><column class="2">Parliament Party</column></row>
<row class="13"><column class="1">12</column><column class="2">United Conscious Builders of the Dream Party</column></row>
<row class="14"><column class="1">13</column><column class="2">The Egalitarian Party</column></row>
<row class="15"><column class="1">14</column><column class="2">The Humanitarian Party</column></row>
<row class="16"><column class="1">15</column><column class="2">Scientifically Evolving University Party</column></row>
<row class="17"><column class="1">16</column><column class="2">God, Truth &amp; Love Party</column></row>
<row class="18"><column class="1">17</column><column class="2">Superhappy Party</column></row>
<row class="19"><column class="1">18</column><column class="2">Working Families Party</column></row>
<row class="20"><column class="1">A</column><column class="2">Democratic</column></row>
<row class="21"><column class="1">B</column><column class="2">Republican</column></row>
<row class="22"><column class="1">C</column><column class="2">Decline to State</column></row>
<row class="23"><column class="1">D</column><column class="2">American Independent</column></row>
<row class="24"><column class="1">E</column><column class="2">Citizen Party</column></row>
<row class="25"><column class="1">F</column><column class="2">Communist</column></row>
<row class="26"><column class="1">G</column><column class="2">Conservative</column></row>
<row class="27"><column class="1">H</column><column class="2">Environmentalist</column></row>
<row class="28"><column class="1">I</column><column class="2">Ind. Progressive</column></row>
<row class="29"><column class="1">J</column><column class="2">Liberal</column></row>
<row class="30"><column class="1">K</column><column class="2">Peace &amp; Freedom</column></row>
<row class="31"><column class="1">L</column><column class="2">Prohibition</column></row>
<row class="32"><column class="1">M</column><column class="2">New Economy</column></row>
<row class="33"><column class="1">N</column><column class="2">Socialist</column></row>
<row class="34"><column class="1">O</column><column class="2">Socialist Labor</column></row>
<row class="35"><column class="1">P</column><column class="2">Pot Party</column></row>
<row class="36"><column class="1">Q</column><column class="2">Libertarian</column></row>
<row class="37"><column class="1">R</column><column class="2">Amer. Natl. Socialist</column></row>
<row class="38"><column class="1">S</column><column class="2">Poor People’s Party</column></row>
<row class="39"><column class="1">T</column><column class="2">Free</column></row>
<row class="40"><column class="1">U</column><column class="2">National</column></row>
<row class="41"><column class="1">V</column><column class="2">Constitution Party</column></row>
<row class="42"><column class="1">W</column><column class="2">Vision</column></row>
<row class="43"><column class="1">X</column><column class="2">Puritan</column></row>
<row class="44"><column class="1">Y</column><column class="2">Federal</column></row>
<row class="45"><column class="1">Z</column><column class="2">Misc.</column></row>
</sheet>
```
As you can see, this file has two sheets; otherwise the structure is the same as in the CSV case. Loading the XLSX file gives precisely the same result as the XLS one, so we omit that test.
How else can you use this functionality, apart from parsing the final data? One option is to use a spreadsheet as a feed of resources your digger should scrape. For example, you add a list of links to products in a store to the sheet; your scraper reads the sheet, picks up the list of URLs, puts them into the pool, and then the main logic of the scraper collects the data about the goods. Alternatively, imagine that you have a spreadsheet with data that must be extended with data from the web. Your scraper reads the sheet, goes through it line by line and forms a new dataset; for each line it can visit some page and extract additional information to keep in the new dataset. This way the data from the spreadsheet and the product page are merged into a single entry. There are other ways to use spreadsheets, but we can talk about them next time.
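The feed scenario could look roughly like the following sketch. The spreadsheet URL, the assumption that the first column of each row holds a product URL, and the product-page selectors are all hypothetical, made up for illustration:

```yaml
---
config:
    debug: 2
    agent: Firefox
do:
# hypothetical spreadsheet whose first column contains product URLs
- walk:
    to: https://www.example.com/feeds/products.xlsx
    do:
    - find:
        path: sheet > row > column:first-child
        do:
        # the cell text (a product URL) lands in the register
        - parse
        # follow the URL from the register and scrape the product page
        - walk:
            to: value
            do:
            - object_new: product
            - find:
                path: h1.product-title
                do:
                - parse
                - object_field_set:
                    object: product
                    field: title
            - object_save:
                name: product
```

The inner walk reuses the digger's main scraping logic for every URL found in the sheet, so extending the feed is just a matter of adding rows to the spreadsheet.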