
Exploratory Data Analysis – A Short Example Using World Bank Indicator Data

Tony Hirst - July 7, 2013 in Data Stories, HowTo

Knowing how to get started with an exploratory data analysis can often be one of the biggest stumbling blocks if a data set is new to you, or you are new to working with data. I recently came across a powerful example from Al Essa/@malpaso in which he illustrates one way in to exploring a new data set – explaining a set of apparent outliers in the data. (Outliers are points that are atypical compared to the rest of the data, in this example by virtue of taking on extreme values compared to other data points collected at the same time.)

The case refers to an investigation of life expectancy data obtained from the World Bank (World Bank data sets: life expectancy at birth*), and how Al tried to find what might have caused an apparent crash in life expectancy in Rwanda during the 1990s: The Rwandan Tragedy: Data Analysis with 7 Lines of Simple Python Code

*if you want to download the data yourself, you will need to go into the Databank page for the indicator, then make an Advanced Selection on the Time dimension to select additional years of data.

world bank data

The environment that Al uses to analyse the data in the case study is iPython Notebook, an interactive environment for editing and running Python code within the browser. (You can download the necessary iPython application from here (I installed the Anaconda package to try it), and then follow the iPython Notebook instructions here to get it running. It’s all a bit fiddly, and could do with a simpler install and start routine, but if you follow the instructions it should work okay…)

Ipython notebook
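
If you want a flavour of the sort of code involved before clicking through, here is a minimal sketch along similar lines using the pandas library. Note that this is not Al's actual code: the filename and column names are assumptions about how the downloaded World Bank CSV might be laid out, so you may need to tweak them to match your own download.

import pandas as pd

# hypothetical filename for the downloaded indicator data
df = pd.read_csv('life_expectancy_at_birth.csv')

# pick out the row for Rwanda (assumes a 'Country Name' column and one column per year)
rwanda = df[df['Country Name'] == 'Rwanda']
years = [str(y) for y in range(1960, 2012)]

# plot life expectancy over time - the crash in the 1990s should stand out
rwanda[years].T.plot(legend=False, title='Rwanda: life expectancy at birth')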

iPython is not the only environment that supports this sort of exploratory data analysis, of course. For example, we can do a similar analysis using the statistical programming language R, and the ggplot2 graphics library to help with the chart plotting. To get the data, I used a special R library called WDI that provides a convenient way of interrogating the World Bank Indicators API from within R, and makes it easy to download data from the API directly.

I have posted an example of the case study using R, and the WDI library, here: Rwandan Tragedy (R version). The report was generated from a single file written using a markup language called R markdown in the RStudio environment. R markdown provides a really powerful workflow for creating “reproducible reports” that combine analysis scripts with interpretive text (RStudio – Using Markdown). You can find the actual R markdown script used to generate the Rwanda Tragedy report here.

As you have seen, exploratory data analysis can be thought of as having a conversation with data, asking it questions based on what answers it has previously told you, or based on hypotheses you have made using other sources of information or knowledge. If exploratory data analysis is new to you, try walking through the investigation using either iPython or R, and then see if you can take it further… If you do, be sure to let us know how you got on via the comments:-)


Get Started With Scraping – Extracting Simple Tables from PDF Documents

Tony Hirst - June 18, 2013 in Scraping

As anyone who has tried working with “real world” data releases will know, sometimes the only place you can find a particular dataset is as a table locked up in a PDF document, whether embedded in the flow of a document, included as an appendix, or representing a printout from a spreadsheet. Sometimes it can be possible to copy and paste the data out of the table by hand, although for multi-page documents this can be something of a chore. At other times, copy-and-pasting may result in something of a jumbled mess. Whilst there are several applications available that claim to offer reliable table extraction services (some free software, some open source software, some commercial software), it can be instructive to “View Source” on the PDF document itself to see what might be involved in scraping data from it.

In this post, we’ll look at a simple PDF document to get a feel for what’s involved with scraping a well-behaved table from it. Whilst this won’t turn you into a virtuoso scraper of PDFs, it should give you a few hints about how to get started. If you don’t count yourself as a programmer, it may be worth reading through this tutorial anyway! If nothing else, it may give a feel for the sorts of thing that are possible when it comes to extracting data from a PDF document.

The computer language I’ll be using to scrape the documents is the Python programming language. If you don’t class yourself as a programmer, don’t worry – you can go a long way copying and pasting other people’s code and then just changing some of the decipherable numbers and letters!

So let’s begin, with a look at a PDF I came across during the recent School of Data data expedition on mapping the garment factories. Much of the source data used in that expedition came via a set of PDF documents detailing the supplier lists of various garment retailers. The image I’ve grabbed below shows one such list, from Varner-Gruppen.

Supplier list

If we look at the table (and looking at the PDF can be a good place to start!) we see that the table is a regular one, with a set of columns separated by white space, and rows that for the majority of cases occupy just a single line.

Supplier list detail

I’m not sure what the “proper” way of scraping the tabular data from this document is, but here’s the sort of approach I’ve arrived at from a combination of copying things I’ve seen, and a bit of my own problem solving.

The environment I’ll use to write the scraper is Scraperwiki. Scraperwiki is undergoing something of a relaunch at the moment, so the screenshots may differ a little from what’s there now, but the code should be the same once you get started. To be able to copy – and save – your own scrapers, you’ll need an account; but it’s free, for the moment (though there is likely to soon be a limit on the number of free scrapers you can run…) so there’s no reason not to…;-)

Once you create a new scraper:

scraperwiki create new scraper

you’ll be presented with an editor window, where you can write your scraper code (don’t panic!), along with a status area at the bottom of the screen. This area is used to display log messages when you run your scraper, as well as updates about the pages you’re hoping to scrape that you’ve loaded into the scraper from elsewhere on the web, and details of any data you have popped into the small SQLite database that is associated with the scraper (really, DON’T PANIC!…)

Give your scraper a name, and save it…

blank scraper

To start with, we need to load a couple of programme libraries into the scraper. These libraries provide programming tools that do much of the heavy lifting for us, and hide much of the nastiness of working with the raw PDF document data.

import scraperwiki
import urllib2, lxml.etree

No, I don’t really know everything these libraries can do either, although I do know where to find the documentation for them… lxml.etree, scraperwiki! (You can also download and run the scraperwiki library in your own Python programmes outside of scraperwiki.com.)

To load the target PDF document into the scraper, we need to tell the scraper where to find it. In this case, the web address/URL of the document is http://cdn.varner.eu/cdn-1ce36b6442a6146/Global/Varner/CSR/Downloads_CSR/Fabrikklister_VarnerGruppen_2013.pdf, so that’s exactly what we’ll use:

url = 'http://cdn.varner.eu/cdn-1ce36b6442a6146/Global/Varner/CSR/Downloads_CSR/Fabrikklister_VarnerGruppen_2013.pdf'

The following three lines will load the file into the scraper, “parse” the data into an XML document format, which represents the whole PDF in a way that resembles an HTML page (sort of), and then provide us with a link to the “root” of that document.

pdfdata = urllib2.urlopen(url).read()
xmldata = scraperwiki.pdftoxml(pdfdata)
root = lxml.etree.fromstring(xmldata)

If you run this bit of code, you’ll see the PDF document gets loaded in:

Scraperwiki page loaded in

Here’s an example of what some of the XML from the PDF we’ve just loaded looks like if we preview it:

print lxml.etree.tostring(root, pretty_print=True)

PDF as XML preview

We can see how many pages there are in the document using the following command:

pages = list(root)
print "There are",len(pages),"pages"

The scraperwiki.pdftoxml function I’m using converts each line of the PDF document into a separate grouped element. We can iterate through each page, and each element within each page, using the following nested loop:

for page in pages:
  for el in page:

We can take a peek inside the elements using the following print statement within that nested loop:

if el.tag == "text":
  print el.text, el.attrib

Previewing the XML element contents

Here’s the sort of thing we see from one of the table pages (the actual document has a cover page followed by several tabulated data pages):

Bangladesh {'font': '3', 'width': '62', 'top': '289', 'height': '17', 'left': '73'}
Cutting Edge {'font': '3', 'width': '71', 'top': '289', 'height': '17', 'left': '160'}
1612, South Salna, Salna Bazar {'font': '3', 'width': '165', 'top': '289', 'height': '17', 'left': '425'}
Gazipur {'font': '3', 'width': '44', 'top': '289', 'height': '17', 'left': '907'}
Dhaka Division {'font': '3', 'width': '85', 'top': '289', 'height': '17', 'left': '1059'}
Bangladesh {'font': '3', 'width': '62', 'top': '311', 'height': '17', 'left': '73'}

Looking again at the output from each row of the table, we see that there are regular position indicators, particularly the “top” and “left” coordinates, which correspond to the co-ordinates of where the registration point of each block of text should be placed on the page.

If we imagine the PDF table marked up as follows, we might be able to add some of the co-ordinate values as follows – the blue lines correspond to co-ordinates extracted from the document:

imaginary table lines

We can now construct a small default reasoning hierarchy that describes the contents of each row based on the horizontal (“x-axis”, or “left” co-ordinate) value. For convenience, we pick values that offer a clear separation between the x-co-ordinates defined in the document. In the diagram above, the red lines mark the threshold values I have used to distinguish one column from another:

if int(el.attrib['left']) < 100: print 'Country:', el.text,
elif int(el.attrib['left']) < 250: print 'Factory name:', el.text,
elif int(el.attrib['left']) < 500: print 'Address:', el.text,
elif int(el.attrib['left']) < 1000: print 'City:', el.text,
else:
  print 'Region:', el.text

Take a deep breath and try to follow the logic of it. Hopefully you can see how this works…? The data rows are ordered, stepping through each cell in the table (working left to right) for each table row in turn. The repeated if-else statement tries to find the leftmost column into which a text value might fall, based on the value of its “left” attribute. When we find the value of the rightmost column, we print out the data associated with each column in that row.

We’re now in a position to look at running a proper test scrape, but let’s optimise the code slightly first: we know that the data table starts on the second page of the PDF document, so we can ignore the first page when we loop through the pages. As with many programming languages, Python starts counting at 0; to loop through the second page to the final page in the document, we can use this revised loop statement:

for page in pages[1:]:

Here, pages describes a list with N items, which we could write out in full as pages[0:N]. Python list indexing counts the first item in the list as item zero, so [1:] defines the sublist from the second item in the list (which has the index value 1, given that we start counting at zero) to the end of the list.
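
As a trivial illustration of that slicing behaviour (using a toy list rather than our real pages list, so as not to overwrite it):

items = ['cover', 'page 2', 'page 3', 'page 4']
print items[0]    # 'cover' - list indices start at zero
print items[1:]   # ['page 2', 'page 3', 'page 4'] - everything from the second item onwards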

Rather than just printing out the data, what we really want to do is grab hold of it, a row at a time, and add it to a database.

We can use a simple data structure to model each row in a way that identifies which data element was in which column. We initialise this data element when we reach the first cell of a row, and print it out when we reach the last. Here’s some code to do that:

for page in pages[1:]:
  for el in page:
    if el.tag == "text":
      if int(el.attrib['left']) < 100: data = { 'Country': el.text }
      elif int(el.attrib['left']) < 250: data['Factory name'] = el.text
      elif int(el.attrib['left']) < 500: data['Address'] = el.text
      elif int(el.attrib['left']) < 1000: data['City'] = el.text
      else:
        data['Region'] = el.text
        print data

And here’s the sort of thing we get if we run it:

starting to get structured data

That looks nearly there, doesn’t it, although if you peer closely you may notice that sometimes we catch a header row. There are a couple of ways we might be able to ignore the elements in the first, header row of the table on each page.

  • We could keep track of the “top” co-ordinate value and ignore the header line based on the value of this attribute.
  • We could take a hacky, lazy way out and explicitly ignore any text value that is one of the column header values.

The first is rather more elegant, and would also allow us to automatically label each column and retain its semantics, rather than explicitly labelling the columns using our own labels. (Can you see how? If we know we are in the title row based on the “top” co-ordinate value, we can associate the column headings with the “left” coordinate value; there’s a sketch of this idea after the next code block.) The second approach is a bit more of a blunt instrument, but it does the job…

skiplist=['COUNTRY','FACTORY NAME','ADDRESS','CITY','REGION']
for page in pages[1:]:
  for el in page:
    if el.tag == "text" and el.text not in skiplist:
      if int(el.attrib['left']) < 100: data = { 'Country': el.text }
      elif int(el.attrib['left']) < 250: data['Factory name'] = el.text
      elif int(el.attrib['left']) < 500: data['Address'] = el.text
      elif int(el.attrib['left']) < 1000: data['City'] = el.text
      else:
        data['Region'] = el.text
        print data
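
For the record, here’s a rough sketch of what the first, more elegant approach might look like: learning the column labels from the header row on each page (based on its “top” co-ordinate) and then using the “left” co-ordinates of those headings to label the data cells. Treat it as an untested sketch; the assumption that the first text row on each page is the header row, and the nearest-heading matching, are my own rather than anything in the original recipe.

for page in pages[1:]:
  headings = {}      # maps a "left" co-ordinate to a column label learned from the header row
  header_top = None  # "top" co-ordinate of the header row on this page
  data = {}
  for el in page:
    if el.tag != "text": continue
    if header_top is None:
      header_top = int(el.attrib['top'])          # assume the first text row on the page is the header
    if int(el.attrib['top']) == header_top:
      headings[int(el.attrib['left'])] = el.text  # e.g. 73 -> 'COUNTRY'
    else:
      # label this cell with the heading whose "left" co-ordinate is nearest to it
      nearest = min(headings, key=lambda left: abs(left - int(el.attrib['left'])))
      data[headings[nearest]] = el.text
      if headings[nearest] == 'REGION':           # the rightmost column ends a table row
        print data
        data = {}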

At the end of the day, it’s the data we’re after and the aim is not necessarily to produce a reusable, general solution – expedient means occasionally win out! As ever, we have to decide for ourselves the point at which we stop trying to automate everything and consider whether it makes more sense to hard code our observations rather than trying to write scripts to automate or generalise them.

http://xkcd.com/974/ - The General Problem

The final step is to add the data to a database. For example, instead of printing out each data row, we could add the data to a scraper database table using the command:

scraperwiki.sqlite.save(unique_keys=[], table_name='fabvarn', data=data)

Scraped data preview

Note that the repeated database accesses can slow Scraperwiki down somewhat, so instead we might choose to build up a list of data records, one per row, for each page, and then add all the records scraped from a page to the database one page at a time.
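
For example, something along these lines collects the rows for each page into a list and saves them in one go. This is a sketch only, which I haven’t run against the live scraper, though the scraperwiki library’s save function should accept a list of record dictionaries as well as a single one:

for page in pages[1:]:
  records = []    # one dict per table row on this page
  for el in page:
    if el.tag == "text" and el.text not in skiplist:
      if int(el.attrib['left']) < 100: data = { 'Country': el.text }
      elif int(el.attrib['left']) < 250: data['Factory name'] = el.text
      elif int(el.attrib['left']) < 500: data['Address'] = el.text
      elif int(el.attrib['left']) < 1000: data['City'] = el.text
      else:
        data['Region'] = el.text
        records.append(data)
  # a single database call per page rather than one per row
  scraperwiki.sqlite.save(unique_keys=[], table_name='fabvarn', data=records)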

If we need to remove a database table, this utility function may help – call it using the name of the table you want to clear…

def dropper(table):
  if table!='':
    try: scraperwiki.sqlite.execute('drop table "'+table+'"')
    except: pass

Here’s another handy utility routine I found somewhere a long time ago (I’ve lost the original reference?) that “flattens” the marked up elements and just returns the textual content of them:

def gettext_with_bi_tags(el):
  res = [ ]
  if el.text:
    res.append(el.text)
  for lel in el:
    res.append("<%s>" % lel.tag)
    res.append(gettext_with_bi_tags(lel))
    res.append("</%s>" % lel.tag)
    if lel.tail:
      res.append(lel.tail)
  return "".join(res).strip()

If we pass this function an element corresponding to something like <em>Some text</em> it will return Some text; for <em>Some <strong>text</strong></em> it will return Some <strong>text</strong> – that is, the element’s text content with any nested bold/italic style tags preserved.
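
For example, we could use it in place of el.text when previewing the text elements on a page:

for page in pages[1:]:
  for el in page:
    if el.tag == "text":
      print gettext_with_bi_tags(el)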

Having saved the data to the scraper database, we can download it or access it via a SQL API from the scraper homepage:

scraped data - db

You can find a copy of the scraper here and a copy of various stages of the code development here.

Finally, it is worth noting that there is a small number of “badly behaved” data rows that split over more than one table row on the PDF.

broken scraper row

Whilst we can handle these within the scraper script, the effort of creating the exception handlers sometimes exceeds the pain associated with identifying the broken rows and fixing the data associated with them by hand.

Summary

This tutorial has shown one way of writing a simple scraper for extracting tabular data from a simply structured PDF document. In much the same way as a sculptor may lock on to a particular idea when working a piece of stone, a scraper writer may find that they lock in to a particular way of parsing data out of a document, and develop a particular set of abstractions and exception handlers as a result. Writing scrapers can be infuriating at times, but may also prove very rewarding in the way that solving any puzzle can be. Compared to copying and pasting data from a PDF by hand, it may also be time well spent!

It is also worth remembering that sometimes it can be quicker to write a scraper that does most of the job, and then finish off the data cleansing or exception handling using another tool, such as OpenRefine or even just a simple text editor. On occasion, it may also make sense to throw the data into a database table as quickly as you can, and then develop code to manage a second pass that takes the raw data out of the database, tidies it up, and then writes it in a cleaner or more structured form into another database table.

The images used in this post are available via a flickr set: ScoDa-Scraping-SimplePDFtable


Analysing UK Lobbying Data Using OpenRefine

Tony Hirst - June 4, 2013 in Data Cleaning, OpenRefine

Being able to spot when we might be able to turn documents into datasets is a useful skill for any data journalist or watchdog to develop. In this (rather long!) practical walkthrough post, we’ll see how we can start to use OpenRefine to turn a set of text-based forms into data. Specifically, given a piece of text that details the benefits awarded to a political group from a set of possible lobbying interests, how can we pull out the names of some of the lobbyists involved, along with some of the financial amounts they have donated, and turn it into data we can start to work with?

For example, in a piece of text that has the form:

Lobby Group X (a not-for-profit organisation) acts as the group’s secretariat. Company A paid £3200 towards the cost of a seminar held in May 2012 and Another Company paid £1200 towards the cost of a reception held in January 2013 (registered April 2013).

how could we pull out the name of the organisation providing secretariat support, or the names of the financial benefactors, the amounts they provided and the reason for the donation? In this post, we’ll see how we can use OpenRefine to start pulling out some of this information and put it into a form we can use to start looking for connected or significant interests.

The screenshots used to illustrate this post can be found in this photo set: School of Data – Open Refine – All Party Groups

The context for this investigation is a minor political scandal that broke in the UK over the last few days (Patrick Mercer resignation puts spotlight on lobbying). In part, the event put the spotlight onto a set of political groupings known as All-Party Groups, informal cross-party, subject- or topic-specific interest groups made up of members of both houses of the UK parliament (see for example the list of All Party Groups).

Many All Party Groups are supported by lobby groups, or other organisations with a shared interest. Support may take the form of providing secretariat services, making financial donations to support the group’s activities, or covering the costs of associated travel and accommodation expenses. As such, there has always been the risk that the groups might play a role in a lobbying scandal (for example, APPGs – the next Westminster scandal?).

The UK Parliament website publishes transparency information that details the officers of the group and the names of twenty founding members (though the group may have many more members), along with disclosures about benefits received by the group.

The information is published as a set of web pages (one per group) as well as via a single PDF document.

example EPG page

A newly formed website, Allparty, has started teasing out some of this data, including trying to break out some of the benefits information and organising groups according to meaningful subject tags, although as yet there is no API (that is, no programmable way) of accessing or querying their datasets. A scraper on scraperwiki – David Jones’ All Party Groups – makes the raw data for each group available, as well as detailing group membership for each MP or Lord.

APG scraper on scraperwiki

The Scraperwiki API allows us to interrogate this data using structured queries of the form described in the School of Data post Asking Questions of Data – Some Simple One-Liners. However, it does not break down the benefits information into a more structured form.

So how might we start to pull this data out? From the download link on the Scraperwiki scraper, we can get a link to a CSV file containing the data about the All Party Groups. We can use this link as a data source for a new OpenRefine project:

openrefine import data csv

If the CSV format isn’t detected automatically, we can configure the import directly before we create the project.

openrefine csv import config

Having got the data in, let’s start by trying to identify the groups that provide secretariat support. If we don’t get them all, it doesn’t matter so much for now. The aim is to get enough data to let us start looking for patterns.

We can see which rows have a benefit relating to secretariat support by filtering the Benefits column.

open refine text filter

If we look at some of the descriptions, we see there is a whole host of variations on theme…

how many ways of providing secretariat

So how might we go about creating a column that contains the name of the organisation providing secretariat support?

open refine add column based on column

“Parsing” Data using GREL Replace Expressions

When it comes to defining the contents of the new column, we might notice that the Benefits descriptions often start with the name of the organisation providing secretariat services. We could use the GREL expression language to simplify the text by spotting certain key phrases and then deleting content from there on. For example, in the sentence:

Confederation of Forest Industries (a not-for-profit organisation) acts as the groups secretariat.

we could just delete ” acts as the groups secretariat.”. The GREL expression value.replace(/ acts as the groups secretariat./,'') replaces the specified phrase with nothing (that is, an empty string). (Note that the “.” represents any single character, not just a full stop.) By recognising patterns in the way Benefits paragraphs are structured, we can start to come up with a crude way of parsing out who provides the secretariat.

openrefine secretariat

This leaves us with a bit of a mess if there is text following the deleted phrase, so we can instead delete any number of characters (.*) following the phrase using value.replace(/ acts as the groups secretariat.*/,'').

openrefine secretariat 2

We also notice that there are other constructions that we need to account for… We can start to knock these out in a similar way by adding additional replace elements, as this construction shows:

openrefine secretariat 3

That is, value.replace(/ act as the groups secretariat.*/,'').replace(/ provides secretariat.*/,'')

If we look through some of the paragraphs that aren’t tidied up, we notice in some cases there are constructions that are almost, but don’t quite, match constructions we have already handled. For example, compare these two:

acts as the groups secretariat.
to act as the groups secretariat.

We can tweak the replace expression to at least identify either “act” or “acts” by telling it to look for the word “act” optionally followed by the character s (s?), that is, value.replace(/ acts? as the groups secretariat.*/,'').replace(/ provides secretariat.*/,'')

openrefine secretariat 4

Let’s stick with this for now, and create the column, Secretariat, looking at the unique values using a text facet. If we sort by count we see that we have already started to identify some organisations that support several of the groups.

starting to explore the secretariat

If we look through the other values, we see there is still quite a bit of work to be done.

tidying work to be done

If we go back to the Benefits column and create another new column based on it, Secretariat v2, we can reuse the replace expression we started to build up previously and work on a new improved secretariat column.

openrefine grel reuse

Alternatively, we can rethink our replace strategy by looking for key elements of the construction. There are various constructions based on “act” or “acts” or “provide” for example, so we can try to knock those out in a rather more aggressive way, also getting rid of any ” to” statement at the end of the phrase:

value.replace(/ act.*secretariat.*/,'').replace(/ provide.*secretariat.*/,'').replace(/ to/,'')

simplified replace

Looking through some of the phrases we are left with, there is a noticeable number of the form “Company A (a consultancy) is paid by its client, Company B, to”, which are derived from phrases such as “Company A (a consultancy) is paid by its client, Company B, to act as the groups secretariat.”

We could create a second column from the Secretariat v2 column that contains this information. The first thing we’d need to do is identify who’s paying:

value.replace(/.* is paid by its clients?,? /,'')

We can tighten this up a little by splitting it into two parts to cope with the optional “its clients?” statement:

value.replace(/.* is paid by /,'').replace(/its clients?,? /,'')

The first part of this command gets rid of everything up to (.*) and including is paid by; the second part deletes its client with an optional s (s?) followed by an optional comma and then a space.

To only copy third party funders’ names across to the new column, we test to make sure that the replace was triggered, by testing to see if the proposed new column entry is different to the contents of the original cell, adding a further replace to tidy up any trailing commas:

if(value.replace(/.* is paid by /,'').replace(/its clients?,? /,'')!=value,value.replace(/.* is paid by /,'').replace(/its clients?,? /,''),'').replace(/,$/,'')

secretariat funders

(I wonder if there is a tidier way of doing this?)
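
(One alternative would be to switch the expression language for this column from GREL to Jython (Python) and use a regular expression instead, something like the following sketch. The pattern is my own guess based on the phrasings above rather than a tested recipe.)

import re
# grab whatever follows "is paid by", optionally skipping an "its client(s)," prefix
m = re.search(r' is paid by (?:its clients?,? )?(.*?),? to ', value)
if m:
  return m.group(1)
return ''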

Let’s now create another column, Secretariat v3, based on Secretariat v2 that gets rid of the funders:

value.replace(/,? is paid by.*/,'')

delete funders

If we generate a text facet on this new column, we can display all the rows that have something set in this column:

open refine non-blank cells

We also notice from the text facet counts that some funders appear to support multiple groups – we can inspect these directly:

problem - dupe data

Hmm… some of the groups have similar looking names – are these the results of a name change, perhaps, leaving stale data in the original dataset – or are they really different groups? Such is the way of working with data that has not been obtained directly from a database! There’s almost always more tidying to do!

If we look to see if there are any groups that appear to offer a lot of secretariat support, we notice one in particular – Policy Connect:

popular secretariat

We can also see that many of the so-supported groups have Barry Sheerman as a contact, and a couple declare Nik Dakin; at least two of them employ the same person (John Arnold).

The point to make here is not so much that there may be any undue influence, just that this sort of commonality may go unnoticed when scattered across multiple documents in an effectively unstructured way.

Whilst there is still work that could be done to further tidy the data set (for example, pulling out “on behalf of” relationships as well as “paid by” relationships), we have got enough data to start asking some “opening” structured questions, such as: are there companies that provide secretariat services for more than one group (answer: Yes, though we need to check for duplicate group names…); what companies or groups fund other parties to provide secretariat services? (Note we may only have partial data on this so far, but at least we have some to start working with, and by inspecting the data we can see how we may need to clean it further).

For example, here is a copy of a CSV file containing the data as tidied using the above recipe, and here is the OpenRefine project file. We can load the CSV file into a Google spreadsheet and start to interrogate it using the approach described in the School of Data post Asking Questions of Data – Garment Factories Data Expedition.

“Parsing” Data using Jython (Python) Regular Expressions

In the previous section we saw how we could use GREL expressions to start parsing out data from a text paragraph. In this section, we’ll see how we can use Jython (that is, Python) for a similar purpose.

To start with, here’s a quick example of how to pull out the names of groups providing secretariat services, as we did using the GREL script:

import re
tmp=value
tmp=re.sub(r'(.*) ( to )?(provide|act)+[s]?.*secretariat.*',r'\1',tmp)
tmp=re.sub(r'(.*) (is paid|on behalf of).*',r'\1',tmp)
if value==tmp:tmp=''
return tmp

Jython secretariat

The first line (import re) loads in the required regular expression library. For convenience, we assign the contents of the cell to a tmp variable, then we look for a string that has the following structure:

  • .* – any number of characters
  • ( to )? – followed by a space character and optionally the word to
  • (provide|act)+[s]? – followed by either the word provide or act with an optional s
  • .*secretariat.* – followed by any number of characters, then the word secretariat then any number of characters.

The re.sub() function has the form re.sub( r'pattern to match',r'result of this expression',STRING_INPUT). In the script above, if the pattern is matched, tmp is set to the value of whatever is contained in the first pair of brackets of the expression being matched (\1).
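
To see how that works outside of OpenRefine, here’s a toy example you could try in an ordinary Python session:

import re
text = 'Policy Connect acts as the groups secretariat.'
# the brackets capture the organisation name; \1 writes just that captured match back out
print re.sub(r'(.*) acts as the groups secretariat.*', r'\1', text)
# prints: Policy Connect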

We can also use regular expressions to pull out the funding groups (Python regular expressions – documentation). So for example, do the initial configuration:

import re
tmp=value

In the following expression, we take whatever matches in the third set of brackets. This broadly matches patterns of the form “X is paid by Y to provide the secretariat” and allows us to extract Y.

tmp = re.sub(r'(.*) (is paid by|on behalf of)(.*) to (provide|act).*secretariat.*',r'\3', tmp)

This results in some strings of the form: “its client, Transplant 2013,” which we can tidy as follows:
tmp = re.sub(r'its client[s,]+ (.*)[,]+$',r'\1', tmp)

There are some constructions that are not captured, eg that take the form “X provides secretariat support on behalf of Y.” The following structure grabs Y out for these cases.

tmp = re.sub(r'.*secretariat.*(on behalf of )([^.]*).*',r'\2', tmp)

If we haven’t matched any payers (so the tmp string is unchanged), return a blank

if value==tmp: tmp=''
return tmp

Using Jython to grab funder names

Here’s how the different techniques compare:

Comparing performance

Remember that what we’re trying to do is structure the data so that we can start to run queries on it. Looking at the benefits, we notice that some groups have received financial support from different groups. For example, the All-Party Parliamentary Gas Safety Group has declared the following benefits:

Policy Connect (a not-for-profit organisation) provides secretariat services to the group. 3000 from the Council of Gas Detection and Environmental Monitoring; 7500 from the Energy Network Association; 6000 from Energy UK; 9000 from the Gas Industry Safety Group; 10,000 from the Gas Industry Safety Group (registered October 2011).

One of the things we might want to do in this case is pull out the amounts awarded as well as the organisations making the donation. Then we can start to count up the minimum benefits received (if we miss some of the data for some reason, we won’t have the total record!) as well as the organisations financially supporting each APG.

If we look at the benefits, we see they tend to be described in a common way – N,NNN from the Company X; MMMM from the Charity Y. Here’s one way we might start pulling out the data:

import re
tmp=value
m=re.findall(r'\d[,0-9]+ from the [^;.]*',tmp)
return m

*Note: it may be better to omit the word “the” from that pattern… For example, our expression will miss “5000 from Bill & Melinda Gates Foundation (registered July 2012)”.

Group payments

A good example to test would be the one from the All Parliamentary Group on Global Health, which also includes constructions such as 5000 from the University of Manchester and 3000 from the Lancet (registered August 2012), which the expression above will identify as a single item, and 5000 from each of the following: Imperial College, Kings Health Partners, London School of Hygiene and Tropical Medicine, Cambridge University Health Partners (registered May 2012), which (if we fix the “the” problem) would confuse it thoroughly!

If we return the items from the list as a single string, we can then split this list across multiple rows – change the last line to return '::'.join(m):

Joined items

With the data in this form, we can generate multiple rows for each group, with one record per payment, by splitting the multi-valued cells on the easy to identify separator (::):

Split multi cells

Using a similar approach to one we used before, use a text facet to only show rows with a value in the Group Payments column:

filter on non-blank again

We can then fill values down on required columns to regenerate complete rows:

fill down a column

Note that if we filled down the columns on the whole dataset, we would incorrectly start filling values in on rows that were not generated from splitting the payments column across multiple rows.

We can also create two new columns from the group payments column, one identifying the amount, the other the name of the organisation that made the payment. Let’s go back to GREL to do this – first let’s pull out the amounts, and then remove any commas from the amount so all we’re left with is a number – value.replace(/ from the.*/,'').replace(',','')

payment extract

Let’s naively try to pull out the organisations responsible – value.replace(/.* from the /,'')

payer extract

NOTE: this actually causes an error… the failure to pull out atomic statements relating to payments means that if we assume sentences are of the simple form 5000 from the Essar Group (registered April 2013) rather than the more complex 7625 from the Royal Academy of Engineering, 2825 from the University of Warwick, 5000 from the Essar Group (registered April 2013) , we end up parsing out the wrong thing.

error in the parsing

This is one of the reasons why it’s much easier if people publish data as data!

However, let’s proceed, remembering that we have broken the data, but pressing on regardless to see what we might be able to do with it if we go back and correct our errors!

For example, let’s tidy up the data a bit more, casting the Payment Amount column to be a numeric type:

convert to number

We can then use a numeric filter to identify payments within a particular range, for example (remembering that our data may be meaningless!):

numeric facet filter

It’s also worth remembering the other problems and omissions we have introduced into this data – the inclusion of the word “the” in the pattern recogniser for the amounts was over-zealous and ruled out a match on several organisations, for example; and as well as causing numerical errors, we have missed out information about separate payments that were joined by “and” when they were declared in the same month.
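
For example, a slightly more forgiving version of the Jython findall step might make the word “the” optional and split up “and”-joined declarations before matching. The sketch below is untested against the full dataset: the lookahead split and the character classes are my own guesses, and it will still trip over awkward constructions like the “from each of the following” example above.

import re
tmp = value
# split multi-payment sentences on semicolons, and on " and " only when a new amount follows
fragments = re.split(r';| and (?=\d)', tmp)
m = []
for fragment in fragments:
    # an amount (digits and commas) followed by "from", with the word "the" now optional
    match = re.search(r'(\d[\d,]*) from (?:the )?([^(;.]+)', fragment)
    if match:
        m.append(match.group(1) + ' from ' + match.group(2).strip())
return '::'.join(m)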

The moral of this story is – look for what the exceptions might be and try to trap them out rather than letting them contaminate your data! Like I didn’t… doh!

That said, at least we’ve made a start, and can work iteratively to improve the quality of the data we have extracted – if we think it is worth spending the time doing so; we also have some data to hand that lets us start exploring possible structured queries over the data, such as how much funding particular secretariats appear to have managed to bring in across several groups, for example, or how many benefits have been received by groups with a particular MP specified as a contact. (From the original data on Scraperwiki, we can also see which MPs or Lords are on the list of the 20 declared names for each group: do some only appear to join groups that declare benefits?!) In other words, even though we should not at this point trust any of the results of queries, because of the errors that were introduced, we can still start to explore the sorts of queries we might be able to write, which in turn can help us decide whether or not it is worth the effort of cleaning the data any further…

…or maybe we should have considered what sort of data model we would be able to get out and interrogate in the first place?!


Asking Questions of Data – Garment Factories Data Expedition

Tony Hirst - May 24, 2013 in Spreadsheets, SQL

As preparation for the upcoming data expedition on Mapping the garment factories this weekend, several intrepid explorers have been collating data from brand supplier lists that identify the location of over 3,000 factories to date.

The data is being published via a Google Spreadsheet, which means we can also treat it as a database, asking database-like queries either within the spreadsheet itself, or via a webservice.

The School of Data blogpost Asking Questions of Data – Some Simple One-Liners introduced the idea of using SQL – the Structured Query Language – to ask questions of a dataset contained in a small database populated with data that had been “liberated” using Scraperwiki. The query language that Google Spreadsheets supports is rather like a cut down version of SQL, so it can be a good place to start learning how to write such queries.

The query language allows us to select just those database/spreadsheet rows where a particular column contains a particular value, such as rows relating to factories located in a particular country or supplying a particular brand. We can also create more complex queries, for example identifying factories located in a particular country that supply a particular brand. We can also generate summary reports, such as listing all the brands identified, or counting all the factories within each city in a particular country.

I’ve posted an informal, minimal interface to the query API as a Scraperwiki view: Google Spreadsheet Query Interface (feel free to clone the view and improve on it!)

spreadsheet explorer config

Paste in the spreadsheet key value, which for the garment factory spreadsheet is 0AvdkMlz2NopEdEdIZ3d4VlFJQ0NkazhrWGFQdXZQMkE, along with the sheet’s “gid” value – 0 for the sheet we want. If you click on the Preview button, we can see the column headings:

spreadsheet query preview headings

We can now start to ask questions of the data, building up a query around the different spreadsheet columns. For convenience (?!), we can use the letters that identify each column in the spreadsheet to help build up our query.

google spreadsheet query form

Let’s start with a simple query that shows all the columns (*) for the first 10 rows (LIMIT 10).

We would write the query out in full as:

SELECT * LIMIT 10

but the form handles the initial SELECT statement for us, so all we need to write is:

* LIMIT 10

Run the query by clicking on the “Go Fish” button, and the results should appear in a table.

run a query

Note that you can sort the rows in the table by clicking on the appropriate column header. The query form also allows you to preview queries using a variety of charts, although for these to work you will need to make sure you select appropriate columns as we’ll see later.

As well as generating a tabular (or chart based) view of the results, running the query also generates a couple of links, one to an HTML table view of the results, one to a CSV formatted version of the data.

csv and html output links

If you look at the web addresses/URLs that are generated for these links, you may notice they are “hackable” (Hunting for Data – Learning How to Read and Write Web Addresses, aka URLs ).

Knowing that the spreadsheet provides us with database functionality allows us to do two different things. Firstly, we can run queries on the data to generate subsets of it, for example as CSV files, that we can load into other tools for analysis purposes. Secondly, we can generate reports on the data that may themselves be informative. Let’s look at each in turn.

Generating subsets of data

In the simplest case, there are two main ways we can generate subsets of data. If you look at the column headings, you will notice that we have separate columns for different aspects of the dataset, such as the factory name, the company it supplies, the country or city it is based in, and so on. (Not all the separate factory results – that is, the separate rows of data – have data in each column, which is something we may need to be aware of!)

In some reports, we may not want to see all the columns, so instead we can select just those columns we want:

SELECT C, D, K,R LIMIT 10

simple select

To see all the results, remove the LIMIT 10 part of the query.

We can also rearrange the order of columns by changing the order in which they appear in the query:

SELECT D, C, K, R LIMIT 10

reorder columns

The second form of subsetting we can do is to limit the rows that are displayed dependent on whether they contain a particular value, or more specifically, when the value of a cell from that row contains a particular value in a particular column.

So for example, here’s a glimpse of some of the factories that are located in India:

SELECT D, C, K, R WHERE K='INDIA' LIMIT 10

some Indian factories

To see all the countries that are referenced, some query languages allow us to use a DISTINCT search limit. The Google Query Language does not support the DISTINCT operator, but we can find another way of getting all the unique values contained within a column using the GROUP BY operator. GROUP BY says “find all the elements in a column that have the same value, and for each of these groups, do something with it”. The something-we-do might simply be to count the number of rows in the group, which we can do as follows (COUNTing on any column other than the GROUP BY column(s) will do).

SELECT K, COUNT(A) GROUP BY K LIMIT 10

finesse distinct

See if you can work out how to find the different Retailers represented in the dataset. Can you also see how to find the different retailers represented in a particular country?

One of the things you may notice from the above result is that “ARGENTINA” and “Argentina” are not the same, and neither are ‘BANGLADESH’ and ‘Bangladesh’…

If we wanted to search for all the suppliers listed in that country, we could do a couple of things:

  • SELECT C, K WHERE K='BANGLADESH' OR K='Bangladesh' – in this case, we accept results where the Country column value for a given row is either “BANGLADESH” or it is “Bangladesh”.

  • SELECT C, K WHERE UPPER(K)='BANGLADESH' – in this case, we set the cell value to its uppercase equivalent, and then test to see if it matches ‘BANGLADESH’

In a certain sense, the latter query style is applying an element of data cleansing to the data as it runs the query.

Creating Simple Reports

Sticking with Bangladesh for a while, let’s see how many different factories each of the different retailers appears to have in that country.

SELECT C, COUNT(D) WHERE UPPER(K)='BANGLADESH' GROUP BY C

simple report

That works okay-ish, but we could tidy the results up a little. We could sort the results by clicking on the count column header in the table, but we could also use the ORDER BY query limit to order the results (ASC for ascending order, DESC for descending). We can also change the LABEL that appears in the calculated column heading.

SELECT C, COUNT(D) WHERE UPPER(K)='BANGLADESH' GROUP BY C ORDER BY COUNT(D) DESC LABEL COUNT(D) 'Number of Factories'

tidied report

As well as viewing the results of the query as a table, if the data is in the right form, we may be able to get a chart view of it.

chart preview

Remember, running the query also generates links to HTML table or CSV versions of the resulting data set.
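
If you want to pull one of those CSV links straight into a Python session for further analysis, something along these lines should do the trick (the URL below is just a placeholder, so paste in the CSV link that the query form actually generates for you):

import csv, urllib2

# placeholder URL - use the CSV link generated by the query form
url = 'http://example.com/query-results.csv'

for row in csv.reader(urllib2.urlopen(url)):
    print row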

As you get more confident writing queries, you might find they increase in complexity. For example, in Bangladesh, how many factories are on the supplier list of each manufacturer in each City?

SELECT C, H, COUNT(D) WHERE UPPER(K)='BANGLADESH' GROUP BY C, H ORDER BY COUNT(D) DESC LABEL COUNT(D) 'Number of Factories'

data cleaning issue

Note that as we ask these more complicated queries – as in this case, where we are grouping by two elements (the supplier and the city) – we start stressing the data more; and as we stress the data more, we may start to find more areas where we may have data quality issues, or where further cleaning of the data is required.

In the above case, we might want to probe why there are two sets of results for Varner-Gruppen in Gazipur?

SELECT C, H, D WHERE UPPER(K)='BANGLADESH' AND C='Varner-Gruppen' AND H = "Gazipur"

data cleansing issue?

Hmm… only five results? Maybe some of the cells actually contain white space as well as the city name? We can get around this by looking not for an exact string match on the city name, but instead looking to see if the cell value in the spreadsheet CONTAINS the search string we are looking for:

SELECT C, H, D WHERE UPPER(K)='BANGLADESH' AND C='Varner-Gruppen' AND H CONTAINS "Gazipur"

try a contains

That looks a little more promising although it does suggest we may need to do a bit of tidying on the dataset, which is what a tool such as OpenRefine is ideal for… But that would be the subject of another post…

What you might also realise from running these queries is how the way you model your data in a database or a spreadsheet – for example, what columns you decide to use and what values you put into those columns – may directly influence the sorts of questions you can ask of it.

Recap

In this post, we have seen how data stored in a Google Spreadsheet can be queried as if it were stored in a database. Queries can be used to generate views that represent a subset of all the data in a particular spreadsheet sheet (sic). As well as interactive tables and even chart views over the data that these queries return, we can create HTML table views or even CSV file representations of the data, each with their own URL.

As well as selecting individual columns and rows from the dataset, we can also use queries to generate simple reports over the data, such as splitting it into groups and counting the number of results in each group.

Although it represents a cut down version of the SQL query language, the Google Query Language is still very powerful. To see the full extent of what it can do, check out the Google Query Language documentation.

If you come up with any of your own queries over the garment factory database that are particularly interesting, why not share them here using the comments below?:-)


Asking Questions of Data – Some Simple One-Liners

Tony Hirst - May 13, 2013 in HowTo, SQL

One of the great advantages of having a dataset available as data is that we can interrogate it in a very direct way. In this post, we’ll see a variety of examples of how we can start to ask structured questions of a dataset.

Although it’s easy for us to be seduced by the simple search boxes that many search engines present into using two or three keyword search terms as the basis of a query, expert searchers will know that using an advanced search form, or the search limits associated with advanced search form elements, can provide additional power to a search.

For example, adding the site: search limit to a web search, as in site:schoolofdata.org, will limit the results to links to pages on a particular web domain; or using the filetype: search limit will allow us to limit results to just PDF documents (filetype:pdf) or spreadsheet files (using something like (filetype:xls OR filetype:csv) for example).

In many cases, the order in which we can add these search limits to a query is not constrained – any order will do. The “query language” is relatively forgiving: the syntax that defines it is largely limited to specifying the reserved word terms used to specify the search limits (for example, site or filetype) and the grammatical rules that say how to combine them with the limiting terms (“reservedWord colon searchLimitValue”, for example) or combine the terms with each other (“term1 OR term2”, for example).

Rather more structured rules define how to construct – and read – a web address/URL, as described in Hunting for Data – Learning How to Read and Write Web Addresses, aka URLs.

When it comes to querying a database – that is, asking a question of it – the query language can be very structured indeed. Whilst it is possible to construct very complex database queries, we can achieve a lot by learning how to write some quite simple, but nonetheless still powerful, queries over even a small database.

A short while ago, I collected some data about the candidates standing for a local election in the UK from the poll notices. You can find the data on Scraperwiki: Isle of Wight Poll Notices Scrape.

Scraperwiki - tables and API

The data from the poll notices is placed into three separate tables:

  • stations – a list of polling stations, their locations, and the electoral division they relate to;
  • candidates – a list of the candidates, along with their party and home address, who are standing in each electoral division;
  • support – a list of the people supporting each candidate on their nomination form.

Let’s look at a fragment of the candidates table:

Scraperwiki - candidates table

You’ll see there are four columns:

  • ward – the electoral division;
  • desc – the party the candidate was standing for
  • candidate – each candidate’s name
  • address – the address of each candidate.

If we go to the Scraperwiki API for this scraper, we can start to interrogate this data:

Scrapewiki - API overview

If we Run the query, we get a preview of the result and a link to a URL that contains the output result presented using the data format we specified (for example, HTML table or CSV).

Scraperwiki - API run

If we click on the result link, we get what we asked for

Scraperwiki output

So let’s have a look at some of the queries we can ask… The query language we’re using is called SQL (“sequel”) and the queries we’ll look at are made up of various parts. If you’re copying and pasting these queries into the Scraperwiki API form, note that sometimes I use single quotes that this WordPress theme may convert to some other character… which means you may need to retype those single quotes…:

  • A bit that says what columns we want to SELECT in our output table;
  • an optional bit that specifies WHERE we want some conditions to be true;
  • an optional bit that says what we want to GROUP the results BY.

Here are some example queries on the candidates table – click through the links to see the results:

Show everything (*): SELECT * FROM candidates

How can we limit the data to just a couple of columns? Show all the candidates’ names and the party description, but nothing else: SELECT candidate, desc FROM candidates (If you swap the order of the column names in the SELECT part of the query, they will display in that swapped order…)

How can we find out what different values occur within a column? Show the unique electoral divisions: SELECT DISTINCT ward FROM candidates (Can you figure out how to get a list of the different unique party names (desc) represented across the wards in this election?) If we SELECT multiple columns, the DISTINCT command will display the unique rows.

How can we select just those rows where the contents of one specific column take on a particular value? Show the names of the candidates standing as Independent candidates, and the electoral division they are standing in: SELECT candidate, ward, desc FROM candidates WHERE desc='Independent'

How can we rename column labels? For example, the “desc” column name isn’t very informative, is it? Here’s how we can list the different parties represented and rename the column as “Party”, sorting the result alphabetically: SELECT DISTINCT desc AS Party FROM candidates ORDER BY Party

How do we select rows where the contents of one specific column contain a particular value? Which electoral divisions are named as electoral divisions in Ryde? SELECT DISTINCT ward FROM candidates WHERE ward LIKE '%Ryde%' (the % characters are wildcards that match any number of characters).

What do we need to do in order to be able to search for rows with particular values in multiple columns? Find out who was standing as a Labour Party Candidate in the Newport electoral divisions: SELECT DISTINCT ward, candidate, desc AS Party FROM candidates WHERE ward LIKE 'Newport%' AND desc='Labour Party Candidate'

How about if we want to find rows where one column contains one particular value, and another column doesn’t contain a particular value? Find candidates standing in Newport electoral divisions that do not appear to have a Newport address: SELECT * FROM candidates WHERE ward LIKE 'Newport%' AND address NOT LIKE '%Newport%'.

Let’s do some counting, by grouping rows according to their value in a particular column, and then counting how many rows are in each group… How many candidates stood in each electoral division: SELECT ward AS 'Electoral Division', COUNT(ward) AS Number FROM candidates GROUP BY ward We can order the results too… SELECT ward AS 'Electoral Division', COUNT(ward) AS Number FROM candidates GROUP BY ward ORDER BY Number Notice how I can refer to the column by its actual or renamed value in the BY elements. To sort in descending order, use ORDER BY Number DESC (use ASC, the default value, to explicitly state ascending order).

Let’s count some more. How many candidates did each party field? SELECT desc AS Party, COUNT(desc) AS Number FROM candidates GROUP BY Party ORDER BY Number DESC

Let’s just look at part of the HTML table output of that query for a moment…

Scraperwiki - IW Council candidates by party

If we click in the browser window and select all of that data, we can paste it into a Datawrapper form:

Datawrapper paste

We can then start to generate a Datawrapper chart… Note that one of the Party names is missing – we can click on the back button and just add in Unknown (copy a tab separator from one of the other rows to separate the Party name for the count…) Here’s the final result:

Datawrapper chart

As we get more confident writing queries, we can generate ever more complex ones. For example, let’s see how many candidates each party stands per electoral division (some of them returned two councillors): SELECT ward AS 'Electoral Division', desc AS Party, COUNT(ward) as Number FROM candidates GROUP BY Party,ward ORDER BY Number DESC. We can use this results table as the input to another query that tells us how many electoral divisions were fought by each party: SELECT Party, COUNT(Party) AS WardsFought FROM (SELECT ward AS 'Electoral Division', desc AS Party, COUNT(ward) as Number FROM candidates GROUP BY Party,ward ORDER BY Number DESC) GROUP BY Party.

To check this seems a reasonable number, we might want to count the distinct number of wards: SELECT COUNT(DISTINCT ward) AS 'Number of Electoral Divisions' from Candidates.

Where the parties did not stand in all the electoral divisions, we might then reasonably wonder – which ones didn’t they stand in? For example, in which electoral divisions were the Conservatives not standing? SELECT DISTINCT ward from Candidates WHERE ward NOT IN (SELECT DISTINCT ward FROM Candidates WHERE desc LIKE '%Conservative%')

Hopefully, you will have seen how, by getting data into a database, we can start to ask quite complex questions of it using a structured query language. Whilst the queries can become quite complex (and they can get far more involved than even the queries shown here), with a little bit of thought, and by building up from very simple queries and query patterns, you should be able to start running your own database queries over your own data quite quickly indeed…

See also: Using SQL for Lightweight Data Analysis

Flattr this!

Hunting for Data – Learning How to Read and Write Web Addresses, aka URLs

Tony Hirst - May 9, 2013 in HowTo

In every data explorer’s toolbox, there is likely to be a range of tools and techniques that have proven their worth again and again. For discovering data on the web, both the public web and private, or closed, corporate intranets, being able to read a URL is one of the backpocket tools we can make use of on a daily basis.

When running web searches, being able to refine our search queries based on information we have about a website’s architecture, on the format of documents we desire to find, or simply by knowing the address of the website we are likely to find the information on, can aid the discovery process.

When writing web scrapers, programmes that literally “scrape” data from an arbitrary website so that we can work with it in our own databases, having a good knowledge of how to structure URLs, and how to spot patterns across them, can prove an invaluable source of shortcuts.

URLs – Uniform Resource Locators [W3C specification] – are perhaps more commonly known as web addresses or web locations. Whenever you see a web address such as schoolofdata.org or http://datajournalismhandbook.org/1.0/en/getting_data_3.html, that’s a URL.

You can find the web address for the page you are currently on by looking in the location bar or address bar at the top of your browser:


Whenever you click on a link in a web page, your browser will load the web page – that is, “go to” the web address – associated with the link. If you hover your mouse cursor over a link – such as this one – you should be able to see the web address that clicking on the link will transport you to in the bottom left corner of the browser:


In the simplest case, the contents of a web page are determined solely by what appears in the location bar. (Web servers are also capable of using other pieces of information to decide what to present in a web page, but we won’t be concerned with them in this post…)

The Anatomy of a URL

For the purposes of getting started, we can think of web addresses as comprising four parts, although only the first of them, the domain, is a must have:

  • the domain, such as schoolofdata.org or www.cse.org.uk. The domain may be further broken down into the top-level domain, such as .com, .eu, or .org.uk; a registered name within that top-level domain, such as okfn or schoolofdata; and a subdomain, such as www. It is worth remembering that an organisation may actually register a name across several top-level domains (for example, UK companies may register their name as a .co.uk address as well as a .com address). Sometimes these different addresses are redirected so that they point to the same website; sometimes they point to different websites altogether.
  • the path: path elements are like nested folders or directories on a computer and are separated by forward slashes (/). In a URL such as http://datajournalismhandbook.org/1.0/en/getting_data_3.html the path is represented by /1.0/en. Sometimes you may be able to make a guess at what path elements represent. In this case, I guess that 1.0 is the version of the handbook and en is the language code for English, so this path probably leads us to version 1.0 of the handbook in English.
  • the page, or name of the document or resource we’re going to load. For example, getting_data_3.html is a page that can be found in the /1.0/en directory on the datajournalismhandbook.org domain.
  • the variables, “arguments” or parameters: sometimes you may notice a ? in a URL, as for example in this URL for a search results page, https://schoolofdata.org/search/?q=open+data. (You might notice there is no “page” element in that web address. That’s fine. There are several situations where this may occur, such as when a “default page” (such as index.html) is presented, or when the URL is a “prettified” version that masks a clunkier URL on the server from which the page is actually served.) It is not uncommon to find pages where several ampersand (&) separated arguments follow the ?, showing that several variable settings are being used to determine what to display on the page. In many cases, the order in which these are presented does not matter (although on some websites it does!). The short Python sketch below shows how these parts can be picked apart programmatically.
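Python's standard urllib.parse module will split a URL into those parts for you. Here's a minimal sketch using the search URL we've already met:

    from urllib.parse import urlparse, parse_qs

    url = "https://schoolofdata.org/search/?q=open+data"
    parts = urlparse(url)

    print(parts.netloc)           # the domain: schoolofdata.org
    print(parts.path)             # the path (and page, if any): /search/
    print(parts.query)            # the raw arguments: q=open+data
    print(parse_qs(parts.query))  # the arguments as a dictionary: {'q': ['open data']}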

Tinkering with URL Arguments – Custom Searches

Once you start to get a feel for how to read URLs, you can start to hack them. Let’s start with a search query, such as https://schoolofdata.org/search/?q=open+data. Click on the link, and then see if you can change the URL to produce a search for OpenRefine.

Here’s how I changed the URL: https://schoolofdata.org/search/?q=openrefine

Many websites use the q= argument to denote a search query. For example, here’s a search on the DuckDuckGo web search engine: https://duckduckgo.com/?q=data+wrangling
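Writing this sort of query URL is just as mechanical as reading one. Here's a minimal sketch that assembles q=-style search URLs with urllib.parse.urlencode; the addresses are the ones mentioned above, and the search_url helper is just something made up for this example:

    from urllib.parse import urlencode

    def search_url(base, **params):
        """Assemble a query URL from a base address and keyword arguments."""
        return base + "?" + urlencode(params)

    # Reproduce the kinds of URL we have been hacking by hand.
    print(search_url("https://schoolofdata.org/search/", q="openrefine"))
    print(search_url("https://duckduckgo.com/", q="data wrangling"))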

Sometimes, you might find there are other arguments you can change directly in the URL bar to limit a search. How might you change this URL for a search on the World Bank site to search between the start of April 2009 and the end of June 2012?

Can you see from the URL where the search term is described? Can you edit the URL to search for mortality indicator africa, without a date range?

If you look at the World Bank search page, you’ll notice that there are a series of “facets” that allow you to limit your search to a particular class of results.


What happens to the URL if you select one of those facets, such as the data facet?

In this case, a path element, rather than the query argument, has changed.

Could you make a guess at how to change the URL to limit the search to some of the other top level facets, such as research, or operations? Try it – if it doesn’t work, you won’t break the web; and you can always click on one of the original links to see what the actual URL looks like.

Try limiting a search using the other search filter links, such as by Type or Database in the data area. Watch how the URL changes – do you think you could navigate the search options simply by hacking the URL?

On many websites, advanced search form fields tend to map on to URL arguments. If you know how to read and write a URL, you can often create advanced custom searches yourself simply by tweaking the URL.

Using URL Information to Refine Web Searches

It’s also worth noting that we can use information gleaned from reading a URL to refine a web search. For example, many web search engines support search limits (advanced search features) that let you limit the results displayed in specific ways. For example, adding the search limit:

  • site:gov.uk

will limit search results to pages hosted on the .gov.uk domain. Try one. We can also use the site limit to search within a particular domain, as for example here: site:bristol.gov.uk underspend. This trick can be particularly useful when searching your own organisation’s website if its own search functionality isn’t up to much!

If you notice a particular path element in a URL, you can often use that to limit a search to results that contain that path. For example, looking at the UK Drinking Water Inspectorate research report archive, I notice that reports have URLs of the form:

  • http://dwi.defra.gov.uk/research/completed-research/reports/DWI70-2-206exsum.pdf

Using this as a crib, I could search for additional reports using an “in URL” limit such as inurl:research/completed-research/reports to search for documents that live down that path.

We can also go fishing, for example for data relating to school dropouts as recorded in a spreadsheet (filetype:xls) file stored down an inurl:reports path on a site:.edu domain: filetype:xls site:.edu inurl:reports dropout.

Hacking URLs on the Off-Chance…

Whenever I see numbers in a URL, I wonder if they’re hackable. For example, consider the URL:

I guess that the 1.0 refers to the version of the handbook. If the handbook went to a second edition, it might be that we would be able to find it via the path element 2.0. Or maybe earlier point releases are available (in this case, I don’t think there are…). There is also a number in the page element. Perhaps if we changed the URL to point to getting_data_2.html it would still work?

Many social media sites use user names as a URL path element. Many people or organisations try to use the same username on different websites, not only for “brand consistency” but also because it’s easier to remember. For example, the Open Knowledge Foundation often uses the username okfn. If I tell you that the following website patterns exist on different social media sites:

  • flickr.com/people/USERNAME
  • twitter.com/USERNAME

do you think you could find the Open Knowledge Foundation pages on those sites?

Here’s a trick that starts to get us on the way to finding a few more – search Google looking for okfn in page URLs: https://www.google.co.uk/search?q=inurl:okfn

Tweaking Path Elements

In many cases, the path structure of a URL replicates the structure of a website, or part of the website. Consider this URL for a post on the School of Data blog:

Q: What happens if we remove the page component to give a URL of the form https://schoolofdata.org/2013/02/19/ ?

A: You get all the posts posted on that day.

If you know the system that is being used to publish the content, you can sometimes hack the URL further. For example, I know that the School of Data blog is hosted on WordPress, and I know that WordPress has a range of URL arguments for tweaking the display. For example, I can display the posts in the order in which they were created by adding the ?order=asc parameter to the URL: https://schoolofdata.org/2013/02/19/?order=asc

(See also: Viewing WordPress Posts in Chronological Order. WordPress also has considerable support for delivering machine-readable RSS feeds of content hosted on a WordPress blog, which can be obtained with a few tweaks to a WordPress URL: WordPress Codex: Feeds.)
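As a quick, hedged illustration of both tweaks, the following sketch fetches a day archive in chronological order and the corresponding feed; appending /feed/ to a WordPress URL is the usual pattern for the machine-readable version, but check the Codex page linked above for the full range of feed URLs:

    from urllib.request import urlopen

    # The day archive, oldest post first (the ?order=asc tweak described above).
    day_archive = "https://schoolofdata.org/2013/02/19/?order=asc"

    # The usual WordPress pattern for an RSS feed of the same content.
    day_feed = "https://schoolofdata.org/2013/02/19/feed/"

    for url in (day_archive, day_feed):
        with urlopen(url) as response:
            print(url, response.status, response.headers.get("Content-Type"))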

How would you change the URL to find posts published on the School of Data blog in November 2012?

Formatting Data Via the URL

Sometimes it may be possible to tweak URL parameters in order to obtain different output formats for a document. For example, if you have the key for a public spreadsheet on Google Docs, you should be able to work out how to create a URL that lets you obtain a copy of the file in a particular document format, such as a text-based CSV (comma-separated values) file, as an Excel spreadsheet (filetype: xls) or as a PDF file.

For example, can you spot which part of the following URL specifies the output type as CSV?

Take a guess at a URL that might display an HTML (web page) version of this spreadsheet. Or a PDF version.

You can also generate these URLs from within a Google Spreadsheet, via the File/Publish menu:


When working with Google spreadsheets, you may find some documents contain multiple sheets. Some file formats, such as CSV, only display a single sheet at a time. By inspecting the URL, can you see where the ID number for a particular sheet may be specified:

HINT: parameter names that contain id often refer to unique or global identifiers… In addition, computer folk often start counting from 0 rather than 1…

Google Spreadsheets can generate these URLs because they are constructed out of particular elements. If you learn to read and write the structure, you can start to generate these URLs yourself. In the case of Google Spreadsheets, just by knowing the key value of the spreadsheet you can start to generate valid URLs for different formats or views onto the spreadsheet.
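As a rough sketch of that idea – and treat the parameter names here as assumptions rather than gospel, since Google has changed its URL scheme more than once – a CSV export URL for a published spreadsheet can be assembled from the key and a sheet number along these lines:

    def spreadsheet_csv_url(key, sheet=0):
        # Assumed pattern for a published Google Spreadsheet CSV export at the
        # time of writing: key identifies the document, output= the format and
        # gid= the (zero-indexed) sheet. Verify the pattern against the URLs
        # generated by File/Publish in your own spreadsheet before relying on it.
        return ("https://docs.google.com/spreadsheet/pub"
                "?key={key}&output=csv&gid={sheet}".format(key=key, sheet=sheet))

    # KEY is a placeholder - substitute the key of a published spreadsheet.
    print(spreadsheet_csv_url("KEY", sheet=1))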

Summary

In this post, you’ve seen how web addresses, or URLs, are structured, and learned how to start reading them. Hopefully, you’ve also tried rewriting a few, too, and maybe even started writing some from scratch.

If you find yourself working with a particular website a lot, it can often be useful to get a feel for how the URLs are structured in order to navigate the site in ways that may be difficult to traverse via links on the website pages themselves.

Websites published using common web platforms can often be recognised from the way their URLs are structured. If you know how to handcraft URLs for those platforms, you may be able to tweak a URL to get hold of the information in a format that is more useful to you than a traditional page view. For example, you can obtain a structured RSS version of posts on a particular topic from a WordPress blog (which can be useful for syndication or screenscraping purposes), or given just a key for a public Google Spreadsheets document, construct a URL that points to a CSV version of the second sheet of that spreadsheet.

Flattr this!

Proving the Data – A Quick Guide to Mapping England and Wales Local Elections

psychemedia - May 2, 2013 in HowTo, Mapping

If the role of news journalists is in part to hold the powers that be to account, whose role is to make sure that claimed releases of public open data are fit for purpose, or that appropriately licensed data is available for civic public use?

With local elections coming round again in the UK (Full Fact provide a good overview of what’s going on: Local elections: the who, what and why), we have an opportunity to put some of the open data released around UK local and county council elections to a practical test. This post will focus in particular on geographical data, which means we can also have some fun learning how to make maps…

Your mission, should you choose to accept it, is to poke around a UK county council website (and maybe a local council website too) to see if they provide the data to make it easy to generate maps of that area. This data could then be used as the basis not just for reports on a local election, but also potentially for other civic use cases. Whilst much of the data may also be available via national datasets, many users of civic data in particular are more likely to be interested in data at a local level. Moreover, these users won’t necessarily have the tools, or the skills, required to download and process sometimes quite large datasets in order to extract just the local data of interest. Which is why we need to prove the data at a local level.

So here are a few quick recipes for generating maps around some of the election data using open data and a variety of free tools we can find scattered around the web.

We’ll look at a couple of things in particular:

  • create your own polling station map from a KML source file of location markers;
  • create your own boundary line map of electoral wards or divisions using boundary line data.

The sorts of maps we’ll generate are of two kinds. The first is just to plot markers onto a map. This approach can be ideal for plotting the location of polling stations, for example. Markers also provide the basis for proportional symbol maps, where the area of a symbol (such as a circle) plotted at a particular point (such as the centroid, or “middle point”, of the electoral area) is used to signify a numerical quantity, such as some function of the size of the majority, for example. The second is to plot boundary lines, such as the boundaries of local wards for unitary authority councils, or the larger electoral wards used for county council elections, to mark out the different population areas that are used to elect each representative. These boundary lines mark out areas that can be coloured, for example according to the party that won the corresponding seat, or maybe the swing in the vote of the winner compared to the previous election.

Create Your Own Polling Station Map from a Source File
The first example we’ll consider is how easy it is to generate a map of polling station locations. A web search for uk polling station location data or polling station uk council election turns up some candidates, several of which appear to be linked to from data.gov.uk, the UK’s central public open data registry, as KML formatted files. When looking for location data, KML is a “Good Thing” to find, because KML is a standardised document format for publishing geographical data. So let’s see what we can do with it…

Polling station data

Or not as the case may be. (So how do I flag this download link as broken? On the data.gov.uk site? Or should I try to contact someone at the council directly?!)

Rooting around the corresponding council website, it seems as if they’ve been having a redesign, and decided to publish the locations of the polling stations this time around in a PDF file.

Let’s go back to the web and search a little more; it seems the Lichfield Council website has a KML file of polling stations dating back to 2010 – http://www2.lichfielddc.gov.uk/geo/polling.kml – so let’s see what we can do with it… If you go to Google Maps, and paste the URL of the KML file into the search box, and then hit return, you should find that the file is loaded into Google Maps and the location data it contains plotted on to the map:

KML in Google maps

If you click on the link icon, you can grab a URL that points to that map, with the data plotted onto it, or an embed code that lets you embed that map in your own web page, subject to Google’s terms and conditions! (I haven’t found a reliable previewer for viewing KML data over OpenStreetMap, though such a thing could be useful.)
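If you'd rather pull the marker data out of the KML file yourself – to load it into a spreadsheet, say, or a different mapping tool – a few lines of Python will do it. This is a minimal sketch using the standard library XML parser; it assumes the file follows the usual KML layout of Placemark elements containing a name and a Point with coordinates:

    import xml.etree.ElementTree as ET
    from urllib.request import urlopen

    KML_NS = "{http://www.opengis.net/kml/2.2}"  # adjust if the file uses an older KML namespace
    url = "http://www2.lichfielddc.gov.uk/geo/polling.kml"

    with urlopen(url) as response:
        tree = ET.parse(response)

    # Print each placemark's name and its lon,lat coordinate string.
    for placemark in tree.iter(KML_NS + "Placemark"):
        name = placemark.findtext(KML_NS + "name")
        coords = placemark.findtext(".//" + KML_NS + "coordinates")
        print(name, coords.strip() if coords else None)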

It seems to be a rare council that actually publishes the location data as such, though. More likely, the data will be locked up (as we have seen) as an address – or even a printed map – in a PDF file. Some local news outlets at least manage to get the data onto a web page, but can we do better?

polling station list

I’ve posted a recipe elsewhere (A Simple OpenRefine Example – Tidying Cut’n’Paste Data from a Web Page) that describes how we can use a tool called OpenRefine to tidy up the address data a bit to get it to look like this (at least, when the data’s viewed in a table layout rather than as raw CSV;-):

data in a fusion table

Having got the data into a nice, tidy form, we can now import it into an application that can geocode the addresses; that is, one that can find the latitude and longitude of the locations represented by the addresses. I haven’t used Google’s new Maps Engine Lite yet, but I believe it can accept CSV files as long as one of the columns contains geocodable data, such as an address…

Maps Engine Lite

Let’s create a new map:

new map

Clicking on the upload link means we can upload some data:

import data

The Maps Engine fully expects some location-related data that it can set its geocoder on to, so which column contains the address data?

Where's the location data?

We should also provide a meaningful label for each address marker (though this data set doesn’t really have that…)

Now pick a marker label (which I don't really have)

Here’s the result:

and there they are...

Any locations the geocoder has a problems with are identified – click on the Data link to pop open a data table view that highlights the problematic rows:

something wrong with these?

If you double click in a cell, it becomes editable…

To share your own map, click on the green Share button in the top right hand corner and then change the privacy setting…

to share, you need to make the map viewable by all, or at least by others with the link...

Note: there are other ways we could geocode this data. Simple Map Making With Google Fusion Tables describes how to do it with Google Fusion Tables, and Geocoding Using the Google Maps Geocoder via OpenRefine describes another route, but in both cases the co-ordinate data will be subject to Google license conditions. Check out the School of Data blog thread on geocoding to find some open alternatives.
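For a fully open route, OpenStreetMap's Nominatim geocoder can be queried over HTTP. Here's a minimal, hedged sketch – mind Nominatim's usage policy (identify your application and keep to roughly one request per second), and check its current documentation for the parameters it accepts; the example address is just for illustration:

    import json
    import time
    from urllib.parse import urlencode
    from urllib.request import Request, urlopen

    def geocode(address):
        """Look up an address with OpenStreetMap's Nominatim; return (lat, lon) or None."""
        params = urlencode({"q": address, "format": "json", "limit": 1})
        req = Request("https://nominatim.openstreetmap.org/search?" + params,
                      headers={"User-Agent": "school-of-data-geocoding-example"})
        with urlopen(req) as response:
            results = json.load(response)
        if not results:
            return None
        return float(results[0]["lat"]), float(results[0]["lon"])

    print(geocode("District Council House, Frog Lane, Lichfield"))
    time.sleep(1)  # be polite to the free service between calls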

Create Your Own Boundary Line Map from a Source File
As well as mapping polling stations, we could also try to map out the different electoral wards that are covered by a particular council area, to give us a map that looks something like this for example:

boundary line map in fusion table

Can you see the boundary lines marking out the wards in there?

Once we have boundary lines associated with areas, it’s possible to colour each area according to some other parameter. In the case of an election, this might be a colour representing the party that took the seat, for example. (As to how to actually do that, that’ll have to be the subject of another post!)

So where can we find boundary line data? A quick search on the Cambridgeshire County Council website turns up a possible source for that area:

Let’s see what happens if we take the URL for the County Council electoral divisions KML file and paste it into the Google Maps search box:

http://data.cambridgeshire.gov.uk/data/democracy/cambridgeshire-county-ward-boundaries/ElectoralDivisions.kml

Can you see the black boundary lines marked on the map? Unfortunately, the areas don’t appear to be labelled (the listing down the left hand side is blank), but at least there’s something there.

To actually work with the data, we can load it into Google Fusion Tables. Download the KML file, save it with a .kml suffix, and then import it into Google Fusion Tables. From Google Drive, select a new Fusion Table:

fusion table create

and then import the KML file:

Fusion table import

Don’t forget to add provenance information:

Keep track of where data came from – provenance

Once the data is loaded, we should see that the “geometry” column has been recognised as KML data. The yellow column also shows that Fusion Tables has recognised that column as a location type too (though we can also change it to just a text column).

the kml data is loaded...

Here’s how the map looks:

and we have a map

To make the map shareable, we need to go to the Share button (top right hand corner of the window):

make it shareable

and select something suitably public:

shareable by link

If you want to see the map, here it is.

If you need a more powerful mapping tool with which to work with the KML file, QGis is a good place to start…

If you struggle to find shapefiles on your local council website, this recipe might help: Boundary Files for Electoral Wards Covered by a Particular Geography

Summary
This has been a quick tour of how to start proving some of the open public geo-data that councils may be making available. If they aren’t, or if they are and there are problems with it, maybe you should let them know?

Flattr this!

DDJSchool Tutorial: Analysing Datasets with Tableau Public

Lucy Chambers - April 27, 2013 in Events, HowTo

This tutorial is written by Gregor Aisch, visualization architect and interactive news developer, based on his workshop, Data visualisation, maps and timelines on a shoestring. The workshop is part of the School of Data Journalism 2013 at the International Journalism Festival.


Pre-requisites

  1. Download and install Tableau Public. At the time of writing there is only a Windows version available.
  2. Download the dataset eurostat-youth.csv

Loading a CSV file

  1. Click Open data to open the data import window. From the list on the left pick Text File and select eurostat-youth.csv. Make sure that Field separator is set to "Comma". Click OK to proceed.

tableau-csv.png

  1. Note: if this step fails with an error message, try changing your system region to English in the Windows control panel (see screenshot). It seems that Tableau cannot handle comma-separated values files if a comma is set as the decimal separator for numbers in the system settings.
  2. Tableau now lists all the columns of the table in the data panel on the left. The columns are classified into Dimensions and Measures.

tableau-initial-view.png

  1. The dataset contains the following columns (all data is from 2011 and aggregated at NUTS-2 level):
  • secondary_edu: percentage of population with secondary education
  • youth_unemployed: percentage of people aged between 18 and 24 who are unemployed and do not participate in education or training.
  • unemployed_15_24M: percentage of unemployed males between 15 and 24.
  • unemployed_15_24F: percentage of unemployed females between 15 and 24.

Analysing a dataset

Now we are going to analyze the dataset using Tableau.

  1. Now drag the field youth_unemployed from Measures to Columns. Then drag secondary_edu to Rows.

tableau-plot-1.png

  1. As you see Tableau computes the sums of the columns instead of plotting the individual values. To fix this we need to right-click the green fields and select Dimension.

tableau-plot-2.png

  1. If both fields are set to be treated as dimensions, you should see a scatterplot like the one shown in the following screenshot. You can see that there is a negative correlation between education and youth unemployment.

tableau-plot-3.png

  1. Now drag the field country from Dimensions to Color to color the plot symbols by country. You can also drag the country to Shape to change the icon.

tableau-plot-4.png

  1. Add the fields country and geo_name to the Detail mark to include that piece of information in the tooltips.
  2. Now you can use the color legend and quick filters to highlight and hide certain countries.
  3. Focus on Turkey
  4. Plotting unemployment by gender

Bonus: creating a map with Tableau

  1. Now we can create a map easily: select the dimensions lat and lon together with the measure count (while holding the Ctrl key) and click on Show Me to expand the list of suggested visualizations. Then click on the icon of the map with the blue circles. Click on Show Me again to hide the panel.

tableau-select-vis.png

  1. Now you should already see the complete map. Tableau is smart enough to use the square roots of the counts for the circles' radii automatically, so we don't have to take care of this ourselves.
  2. You can make the circles transparent by clicking on Color in the panel Marks and moving the transparency slider. To change the size of the circles, click on Size and adjust the slider.
  3. Now drag the field name to the Mark Label to add the city names as labels to the map.

Note: The final steps of this tutorial are going to be added in the coming days. 

Enjoyed this? Want to stay in touch? Join the School of Data Announce Mailing List for updates on more training activities from the School of Data or the Data Driven Journalism list for discussions and news from the world of Data Journalism.

Flattr this!

Creating a Map Using QGis

Lucy Chambers - April 27, 2013 in Events, HowTo

This is the second of the tutorials from the hands-on visualisation session from Gregor Aisch at the School of Data Journalism at the International Journalism Festival in Perugia. In this tutorial we will create a simple map of the Tour de France stations of the last 100 years.

Pre-requisites

    1. Install and configure QGIS. Install it from http://qgis.org; on most systems there should be a one-click installer that guides you through the process.

    2. Install the following handy plugins:

  • Add Delimited Text Layer allows us to read and plot points from a CSV file.

  • Edit Any Layer allows us to easily edit CSV layers.

    In the menu click Plugins > Fetch Python Plugins. In the dialog that appears, type edit any into the filter box to narrow down the list, then select the plugin and click Install/upgrade plugin. Repeat the same for Add Delimited Text Layer.

    3. Download the country shapefile from naturalearthdata.com. We are looking for ne_50m_admin_0_countries.

    4. Download our sample dataset from http://vis4.net/perugia13/tour-de-france.csv.

Creating the base map layer

  1. Click Layer > Add Vector Layer > Browse and select the file 50m_admin_0_countries.shp. That’s the shapefile containing the borders of all countries. Click Open to finally add it to the map.

  2. Filter for countries with ISO code of France. Right-click on the layer and select Query from its context menu. In the text box SQL where clause enter the text: ISO_A3 = ‘FRA’. Make sure to use single-quotes as double-quotes are reserved for addressing column names. Click OK to apply the filter.

  3. Zoom to Metropolitan France. You can simply use the Zoom In tool and draw a rectangle around France.

  4. You might have noticed by now that France looks rather compressed. That is because by default QGIS is using the Plate Carree projection (nerdily referred to by its EPSG code EPSG:4326). You can change the projection by clicking the following icon in the lower right of the window:

  5. In the dialog that opens, activate the checkbox next to “Enable ‘on the fly’ CRS transformation”. Then in the filter text field enter France to search for map projections specialised for France. For instance you can pick ED50 / France EuroLambert. Click OK to activate the projection.

  6. Let’s change the default styling. Again, right-click the layer and select Properties. The next dialog should be opened with the Style tab selected by default. Click the button Change… to change the layer style.

  7. Now we are going to disable the filling by selecting No Brush in the Fill style drop-down. Change the border color to red and increase the border width to 1. Click OK to apply the styling.

  8. By now the resulting map should look like this:

Adding the Tour de France stations

  1. Add delimited layer (CSV of tour de france stations). Click Layer > Add Delimited Text Layer. (If this option is not available, please make sure you installed the corresponding plugin in the first step.) Then click Browse… and select the file tour-de-france.csv that we downloaded previously.

  2. QGIS is smart enough to recognize the format of the CSV file, and it even detects that the columns named lat and lon probably contain the map coordinates. All we need to do is to click OK.

  3. Now QGIS will ask you in what reference system (=projection) the provided coordinates are given. In most cases you will need to pick WGS 84 or EPSG:4326. Just type 4326 in the filter box and select WGS 84. Click OK to finish.

  4. Now our map contains all the locations of Tour de France stations:

  5. Now we are going to size the stations according to how often they have been part of the tour. Right click the layer tour-de-france and select Properties in the context menu.

  6. Change the value in the Size field to a lower value such as 0,5.

  7. Now click Advanced > Size scale field > count to let QGIS use the values in the column count as radius for the symbols.

  8. You might also want to make the symbols more transparent by moving the Transparency slider to 50%.
    Your map should now look like this:

  9. Since we must always size symbols by area, and not radius, we now need to correct our map. As the area of a circle grows proportionally with the square of its radius, we need to compute the square roots of the counts to get proper radii.

  10. Usually you would have done this already during the data preparation phase and simply stored another column in the CSV file – for example, you can load the CSV into a spreadsheet tool like Excel and add a new column with the square roots of the counts (there is also a short Python sketch after this list of steps that does the same thing). However, you can also do this in QGIS using the Edit Any Layer plugin.


  12. In the menu Plugins select Edit Any Layer > Create Editable Layer. Select tour-de-france as input layer and choose a name for the output layer. I will simply use tour-de-france-2 here. Click OK to proceed.

  13. You will be asked for the coordinate system again. WGS84 should be selected by default, so simply clicking OK should work.

  14. Now open the attribute table by right-clicking the new layer and selecting Open Attribute Table in the context menu. You will now see all the data stored in the CSV. Activate editing mode by clicking on the little blue pencil icon (see screenshot). Then open the field calculator by clicking on the little calculator icon.

  15. Make sure that Create a new field is checked and enter a meaningful name for the new column, e.g. radius. As the square roots are going to be decimal numbers, select Decimal number (real) as the Output field type. Finally enter the following formula into the Expression text field: sqrt(count). The dialog should now look like the one shown in the following screenshot. Click OK to proceed.

  16. Back in the attribute table you can take a look at the new column (you may have to scroll the table to the right). Now deactivate editing mode by clicking on the blue pencil icon again. QGIS will ask you if you agree to save the changes. Click Save, and Close the attribute table.

  17. Now hide the layer tour-de-france that we created in step 2 by deactivating its checkbox in the layer window on the left. Now we repeat the second step with the new layer (tour-de-france-2), but instead of count we will pick the column radius for sizing the symbols.

  18. If you like, change the color to blue and set the transparency to 50%. Finally the map should look like this:
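If you'd rather add the square-root column during the data preparation phase, as suggested in step 10 above, a short script will do it outside QGIS. This is a minimal sketch: it assumes the CSV has a numeric count column, as the tour-de-france.csv file used here does, and the output filename is just an example:

    import csv
    import math

    # Read the stations file and write a copy with an extra 'radius' column,
    # so that symbols sized by 'radius' have areas proportional to 'count'.
    with open("tour-de-france.csv", newline="") as infile, \
         open("tour-de-france-radius.csv", "w", newline="") as outfile:
        reader = csv.DictReader(infile)
        writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames + ["radius"])
        writer.writeheader()
        for row in reader:
            row["radius"] = math.sqrt(float(row["count"]))
            writer.writerow(row)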

Exporting to PDF

In the last section we are going to export our map to PDF.

  1. In the menu click File > New Print Composer. The print composer allows us to set up a print layout with our map. Initially the page is empty, but we are going to change this by clicking the icon for Add new map (1) and dragging a rectangle onto the page (2):

  2. Optionally you can disable the black frame by disabling the checkbox General options > Show frame in the panel on the right.

  3. Now in the menu click on File > Export as PDF… to finally save the map as PDF. You can now open the map in other graphic tools such as Illustrator to do some fine tuning (adding title, labels etc).


Enjoyed this? Want to stay in touch? Join the School of Data Announce Mailing List for updates on more training activities from the School of Data or the Data Driven Journalism list for discussions and news from the world of Data Journalism.

Flattr this!

Data Wrapper Tutorial – Gregor Aisch – School of Data Journalism – Perugia

Gregor Aisch - April 27, 2013 in Events, HowTo

By Gregor Aisch, visualization architect and interactive news developer, based on his workshop, Data visualisation, maps and timelines on a shoestring. The workshop is part of the School of Data Journalism 2013 at the International Journalism Festival.

This tutorial goes through the basic process of creating simple, embeddable charts using Datawrapper.

Preparing the Dataset

  1. Go to the Eurostat website and download the dataset Unemployment rate by sex and age groups – monthly average as Excel spreadsheet. You can also directly download the file from here.
  2. We now need to clean the spreadsheet. Make a copy of the active sheet so as to keep the original sheet for reference. Now remove the header and footer rows so that GEO/TIME is stored in the first cell (A1).
  3. It's a good idea to limit the number of entries shown to something around ten or fifteen, since otherwise the chart would get too cluttered. Our story will be about how Europe is divided according to the unemployment rate, so I decided to remove everything but the top-3 and bottom-3 countries plus some reference countries of interest in between. The final dataset contains the countries: Greece, Spain, Croatia, Portugal, Italy, Cyprus, France, United Kingdom, Norway, Austria, Germany.
  4. Let's also try to keep the labels short. For Germany we can remove the appendix "(until 1990 former territory of the FRG)", since it wouldn't fit in our chart.
  5. This is how the final dataset looks in OpenOffice Calc:

dw-prepared-dataset.png

Loading the Data into Datawrapper

  1. Now, to load the dataset into Datawrapper you can simply copy and paste it. In your spreadsheet software look for the Select All function (e.g. Edit > Select All in OpenOffice).
  2. Copy the data into the clipboard by either selecting Edit > Copy from the menu or pressing Ctrl + C (for Copy) on your keyboard.
  3. Go to datawrapper.de and click the link Create A New Chart. You can do this either being logged in or as guest. If you create the chart as guest, you can add it to your collection later by signing up for free.
  4. Now paste the data into the big text area in Datawrapper. Click Upload and continue to proceed to the next step.

dw-paste.png

Check and Describe the Data

  1. Check if the data has been recognized correctly. Things to check for are the number format (in our example the decimal separator , has been replaced with .). Also check whether the row and column headers have been recognized.
  2. Change the number format to one decimal after the point to ensure the data is formatted according to your selected language (e.g. a decimal comma for France).
  3. Now provide information about the data source. The data has been published by Eurostat. Provide the link to the dataset as well. This information will be displayed along with the published charts, so readers can trace back the path to the source themselves.

dw-source3.png

  1. Click Visualize to proceed to the next step.

 

Selecting a Visualization

  1. Time series are best represented using line charts, so click on the icon for line chart to select this visualization.
  2. Give the chart a title that explains both what the readers are seeing in the chart and why they should care about it. A title like "Youth unemployment rates in Europe" only answers half of the question. A better title would be "Youth unemployment divides Europe" or "Youth unemployment on record high in Greece and Spain".
  3. In the introduction line we should clarify what exactly is shown in the chart. Click Introduction and type "Seasonally adjusted unemployment rates of people aged under 25". Of course you can also provide more details about the story.
  4. Now highlight the data series that are most important for telling the story. The idea is to let one or two countries really pop out of the chart and attract the reader's attention immediately. Click Highlight and select Greece and Spain from the list. You might also want to include your own country for reference.
  5. Activate direct labeling to make it easier to read the chart. Also, since our data is already widely distributed, we can force the extension of the vertical axis to the zero-baseline.
  6. We can let the colors support the story by choosing appropriate colors. First, click on the orange field to select it as the base color. Then click on define custom colors and pick red for the high-unemployment countries Greece and Spain. For countries with low youth unemployment, such as Germany, Norway and Austria, we can pick a green or, even better, a blue tone (to respect color-blind readers). Now the resulting chart should look like this:

dw-result1.png

  1. Click Publish to proceed to the last step.

 

Publishing the Visualization

  1. Now a copy of the chart is being pushed to the content delivery network Amazon S3, which ensures that it loads fast under high traffic.
  2. Meanwhile you can already copy the embed code and paste it into your newsroom's CMS to include it in the related news article – just like you would do with a YouTube video.

 

Further tutorials can be found on the Datawrapper website

Enjoyed this? Want to stay in touch? Join the School of Data Announce Mailing List for updates on more training activities from the School of Data or the Data Driven Journalism list for discussions and news from the world of Data Journalism.

Flattr this!
