In this work challenge we will combine Client and Enrollment data, then add a non-HMIS dataframe which contains user account information. After these data are merged, we will parse them for missing data elements and produce a by-user list of data errors.
As stated above, the data needed are:
A dataframe containing HMIS user contact info.
The key to this challenge is literally at the end of every HMIS CSV. Each exported CSV contains some metadata which describes how the data were produced.
The DateCreated should represent when the respective row was actually entered into the HMIS. DateUpdated is the last time that row was modified and saved in the HMIS. The UserID is the case-manager who last modified these data. Lastly, the ExportID is the number which identifies a collection of HMIS CSVs to be in the same batch.
We are going to focus on the UserID element. Notice, you will not find usernames, real names, email addresses, or really any contact information for individual HMIS users. However, having a unique user ID in each CSV still allows HUD to use internal validity tests to determine the reliability of each user.
For us, we are going to take another source of data containing all of the UserIDs and contact information for the users. Now, this will probably be different for each HMIS software vendor, but every vendor should have a way to export a list of the users in the system with their UserIDs, which will allow us to join these data to the HMIS CSVs.
For those participating in the work challenge from my CoC, I’ll provide a CSV with these user data.
After actual user names are joined to the CSVs, we will begin to parse the CSVs for data errors. If you aren’t yet familiar with the term parse in computer science, think of it as diagramming a sentence, where we make the computer do all the work. Instead of a sentence, we will be diagramming a row of data to determine whether it contains any errors.
What’s an HMIS Data Error?
The HMIS Data Dictionary is specific about what a data error is.
8 – Client doesn’t know
9 – Client refused
99 – Data not collected
Here’s an example of a Client.csv which contains one of each type of error.
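The original example table isn’t reproduced here, but a minimal reconstruction might look like the following (the IDs, dates, and exact code values are hypothetical; the first row is Tesa’s, with her first name blank):

```
PersonalID,FirstName,DOB,SSN,VeteranStatus,DisablingCondition
1A3C7,,2001-03-15,123456789,9,0
8B2F1,Fela,1999-01-01,12345,0,0
4D9E2,Sarah,,999999999,0,99
```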
Here are the data errors:
Tesa’s first name is blank
Sarah’s DOB is blank
Fela’s SSN is an incomplete response (must be 9 digits)
Sarah’s SSN is non-determinable
Sarah’s DisablingCondition was not collected.
Tesa refused to provide a VeteranStatus.
We are going to take HMIS data and join it with a dataframe containing end-user information. Then, we will create a query to subset the dataframe so we get a dataframe which contains only rows with data errors. Lastly, we will get counts of the types of data errors and the names of the end-users who’ve caused the most data errors.
The data elements we will check for errors:
To get this information we will need to do the following (a code sketch follows the list):
Load Client.csv, Enrollment.csv, and Users.xlsx
Left join the clientDf and enrollmentDf.
Left join the usersDf to the result of step 2.
Parse the data elements listed above for data errors
Create a dataframe which contains only rows with data errors
Use the SQL Count function to count the number of data errors for each element listed above.
Use the SQL Count function to count how many times an end-user’s name is associated with a row containing errors.
Create a dataframe of these counts
Save the dataframe containing the error counts into an Excel file (.xlsx)
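To make these steps concrete, here is a minimal sketch of the whole pipeline, assuming the sqldf and XLConnect libraries (both discussed later in this article) and hypothetical column choices; the Challenge will walk through each piece in detail:

```r
library(sqldf)
library(XLConnect)

# Steps 1-3: load the CSVs and the user list, then join them
clientDf <- read.csv("Client.csv")
enrollmentDf <- read.csv("Enrollment.csv")
usersDf <- readWorksheetFromFile("Users.xlsx", sheet = 1)

clientAndEnrollmentDf <- sqldf("SELECT clientDf.PersonalID, FirstName, DOB, SSN,
                                       VeteranStatus, DisablingCondition,
                                       enrollmentDf.UserID
                                FROM clientDf
                                LEFT JOIN enrollmentDf
                                ON clientDf.PersonalID = enrollmentDf.PersonalID")

withUsersDf <- sqldf("SELECT clientAndEnrollmentDf.*, UserName
                      FROM clientAndEnrollmentDf
                      LEFT JOIN usersDf
                      ON clientAndEnrollmentDf.UserID = usersDf.UserID")

# Steps 4-5: keep only rows with a blank element or an error code (8, 9, 99)
errorsDf <- sqldf("SELECT *
                   FROM withUsersDf
                   WHERE (FirstName IS NULL OR FirstName = '')
                   OR (DOB IS NULL OR DOB = '')
                   OR DisablingCondition IN (8, 9, 99)
                   OR VeteranStatus IN (8, 9, 99)")

# Steps 6-8: count rows with errors by end-user, most errors first
errorCountsDf <- sqldf("SELECT UserName, COUNT(*) AS 'ErrorCount'
                        FROM errorsDf
                        GROUP BY UserName
                        ORDER BY ErrorCount DESC")

# Step 9: save the counts as an Excel file
errorWorkbook <- loadWorkbook("ErrorCounts.xlsx", create = TRUE)
createSheet(errorWorkbook, name = "Error Counts")
writeWorksheet(errorWorkbook, errorCountsDf, sheet = "Error Counts")
saveWorkbook(errorWorkbook)
```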
Below are the resources which should help for each step:
We’ve worked a bit with Comma Separated Values (.csv) files, but they aren’t the only way to store data. There are a lot of data storage formats, each with its strengths and weaknesses. One of the deficits of the CSV format is it cannot store formatting or graphs. This is the reason the Excel format (.xls or .xlsx) has become another industry standard.
Excel is a program created by Microsoft to allow people to easily work with spreadsheets. With it, they created a way of storing data which allows for formatting and other information to be included. In fact, Excel documents have become so sophisticated that programmers can include entire programs within the document. This is the reason you’ll often get the “Enable Content” button when you open an Excel document. That means there is some code embedded in the Excel document which will run if you say “Enable”. (Be careful, malicious programmers can insert code which could royally blink up your computer.)
When working with HMIS data, being able to load and write Excel documents is necessary. Unfortunately, it adds a lot of complexity.
There are several R libraries which will allow us to work with Excel documents in R. They have different strengths, therefore, I’ll focus on two libraries, rather than one.
Installing either of these libraries should be as simple as running the following code:
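For example, assuming you want both libraries:

```r
install.packages("XLConnect")
install.packages("openxlsx")
```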
However, there are lots of ifs. Both of these libraries rely on the rJava library. Unfortunately, there is often some mismatch of computer architecture. What does that mean? Well, often you’ll install R for amd64, but rJava is easiest to get working with R for i386.
Just know, RStudio has a way to set the version of R you are using: go to Tools, then Global Options. If you are in Windows, at the top of the R General section you will see the option to change your R version. If you are having difficulty getting the Excel libraries above working, try switching the R version to i386. (Don’t forget to restart RStudio after switching.)
Past this, I’d be more than happy to help you troubleshoot. Just leave a comment below or shoot me an email. However, it can get pretty hairy, especially on a Mac.
Working with XLConnect
Nowadays, I only use XLConnect to load data from Excel sheets; I’ve just been too lazy to re-write all my code to use one library (which would be openxlsx). In my opinion, the reason to use XLConnect is it’s a little easier to understand how it loads data. Its weakness is it doesn’t have as much flexibility in formatting Excel documents to be saved on your computer, and it can be confusing to save Excel sheets.
Loading Data from Xlsx Documents
Loading data using XLConnect is a little different than using the read.csv() function. Like I stated earlier, Xlsx documents contain other information besides data. One critical piece of information is the sheet number.
Unlike CSVs, a single Excel document can contain multiple spreadsheets. Each of these sheets will be broken out into tabs when you open an Excel document.
XLConnect doesn’t make any assumptions; it wants you to tell it which sheet you’d like to load.
Here’s how to load an Excel document, the first sheet, in XLConnect:
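A minimal sketch; the file name comes from the example discussed below, and the dataframe name is my choice:

```r
library(XLConnect)

# Read the first sheet of the workbook, starting at the first row
excelDf <- readWorksheetFromFile("VI-SPDAT v2.0.xlsx",
                                 sheet = 1,
                                 startRow = 1)
```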
It is similar to the read.csv() function, but notice the file in the path refers to VI-SPDAT v2.0.xlsx? You want to make sure your file format is either .xlsx or .xls, as the readWorksheetFromFile() function only works with Excel documents.
Also, there are two other parameters. The first, sheet = 1, tells XLConnect to read in only the first sheet. Just know, you could set it to whatever sheet number you’d like. For reference, the sheets are numbered 1, 2, 3, 4, etc., left to right, as they appear when opened in Excel. So, even if your sheets have different names, XLConnect will still load the data respective to their numerical order.
The second parameter is startRow = 1. This allows you to tell R which row to start the dataframe from; useful, for example, if your Excel document has a header which doesn’t contain data.
We could skip down to row three, where the column headers are, by telling XLConnect startRow = 3.
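Something like this:

```r
# Skip the first two rows; the column headers are on row three
excelDf <- readWorksheetFromFile("VI-SPDAT v2.0.xlsx",
                                 sheet = 1,
                                 startRow = 3)
```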
Writing a Dataframe to an Excel Document
Writing Excel documents is a little more complex, and that’s one reason I’m not a huge fan of XLConnect.
Here’s how you’d write an Excel file:
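The original listing isn’t reproduced here, but a sketch matching the description below (with a small, hypothetical peopleDf) would be:

```r
library(XLConnect)

# A small, hypothetical dataframe to write out
peopleDf <- data.frame(Name = c("Sarah", "Fela", "Timmy"),
                       DOB = c("1992-04-01", "1999-01-01", "2010-01-01"))

# Create a workbook; create = TRUE makes the file if it doesn't exist yet
peopleWorkbook <- loadWorkbook("People.xlsx", create = TRUE)

# Create a worksheet called "My People" inside the workbook
createSheet(peopleWorkbook, name = "My People")

# Write the dataframe to the worksheet, then save everything to disk
writeWorksheet(peopleWorkbook, peopleDf, sheet = "My People")
saveWorkbook(peopleWorkbook)
```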
After running this code you should have a file called People.xlsx in your working directory (remember, getwd() will provide the working directory). If you open this file, it should look something like this:
This looks a little complex, but it’s just because XLConnect makes it look complex. Here’s what it is doing:
A workbook is created, which is a place where worksheets can be stored.
myPeopleWorksheet is created inside the workbook created above. The sheet is called “My People”
The worksheet has our peopleDf added to it, then it is saved as a file called “People.xlsx” in our working directory.
Like I said, it’s a lot of unneeded complexity, in my opinion.
Why use Excel Documents
After the added complexity of reading and saving Excel documents, you might wonder what the benefit is. Great question.
As stated at the beginning, Excel documents can contain other information besides just data. They can contain formatting, images, graphs, and a lot of other stuff. And one of the reasons for writing report scripts is to automate all redundant tasks.
Imagine you’ve got a data set of 12,000 participant enrollments. You want to create a spreadsheet which puts the enrollments in descending order. And you want to create this report daily.
If you used write.csv() you would need to open the CSV after creating it, manually add the sort to the document, save it as an Excel file, then send it out. I guarantee, after doing that for several weeks you are going to want to find a way to automate it. Especially if you decide the headers need to have font size 18 as well.
Excel documents allow us to store the formatting tweaks, and XLConnect allows us to insert them automatically.
Adding formatting can get a little more complex and will be the focus of another article. Also, we will use openxlsx there, as it is much easier to output formatting with (again, just my opinion).
Comparing two or more values is an extremely important concept when talking to computers. In writing a report script, it is essential. Comparisons allow us to filter to values within a range, allowing us to provide a report of relevant information.
Take the following data:
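The original snippet isn’t reproduced, but based on the rows discussed below it would look something like this (Sarah’s and Timmy’s birthdates come from the text; Fela’s is my assumption):

```r
peopleDf <- data.frame(Name = c("Sarah", "Fela", "Timmy"),
                       DOB = c("1992-04-01", "1999-01-01", "2010-01-01"))
```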
If you run the above in R you should get a dataframe called peopleDf which looks like this:
It’s a simple table. But let’s say we wanted to get a list of everyone born before 2000-01-01. Of course, we can easily see Timmy is the only person born after 2000. But if our table had thousands of records it wouldn’t be possible to assess so quickly.
Luckily, this is pretty straightforward in SQL-R. We will use the less-than operator (<). You probably remember this sign from high school, from solving inequalities. However, we will be using it as what’s known as a relational operator.
In short, it states,
Is x less than y
If x is less than y the computer is going to say the statement is true (or 1 in binary). If it is not, then the computer will say it’s false (or 0 in binary). Believe it or not, this simple operation is why you have a device in your pocket which could calculate the internal mass of the sun.
For us, things are a little simpler. We just want to know who was born before 2000. Let’s re-write the statement above with our problem:
Is Sarah’s DOB less than 2000-01-01
Well, what is Sarah’s DOB? 1992-04-01. Let’s rewrite and assess (gah, this feels like high-school algebra again).
Is 1992-04-01 less than 2000-01-01
Hmm. This can get confusing for humans, but more importantly, confusing to computers.
In English, we’d probably state this as,
Did 1992-04-01 come before 2000-01-01?
Essentially, that’s what we are doing. Just know, the computer will translate all dates into a number. This number is how many seconds transpired since 1970-01-01.
Why? Computer people picked Thursday, January 1st, 1970, measured in Coordinated Universal Time (UTC), as a shared starting point: the Unix epoch. Think of it as when the computing world came together to standardize time. They figured, “Well, if we have to convert dates into a raw number for computers to understand, it might as well be the number of seconds since that point.”
Ok, enough history lesson. How is this relevant?
Computers convert dates into seconds since 1970-01-01.
Comparing dates is actually comparing numbers.
Taking our statement again, let’s re-write it with the number of seconds since 1970-01-01
Is number of seconds between 1970-01-01 and 1992-04-01 less than number of seconds between 1970-01-01 and 2000-01-01
Is 702,086,400 less than 946,684,800 seconds
Aha, now this makes sense. And the result is true. We can now say, in computer speak: Sarah was born before 2000-01-01.
It’s hard to slow down nowadays. Everything moves quickly and we don’t have time to dig into the “Why.” But, like most things, if you want to be good, you must take the time to do so.
The reason we review how computers understand dates is it directly impacts how we write reports. Do you remember the date conversion trick to get dates to work in SQL from R? That trick is needed because R holds dates as the number of seconds since 1970 and passes that number to SQL as a string; SQL then tries to convert the date into seconds again, screwing everything up.
It pays to RTFM.
Filtering Dataframes by Date
Back to the problem. How do we write a script which provides a dataframe of people born before 2000-01-01?
The code is actually pretty simple,
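A sketch, assuming the sqldf library used throughout these articles:

```r
library(sqldf)

# Everyone born before 2000-01-01
nonMillennialsDf <- sqldf("SELECT *
                           FROM peopleDf
                           WHERE DOB < '2000-01-01'")
```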
This should provide a nonMillennialsDf dataframe, which contains:
And there we go, for all my nerdsplaining the code’s pretty simple, right?
Well, there are a few gotchas. Notice the date we’ve written: it has the format YYYY-MM-DD and is surrounded by single quotes. Any time you use dates in SQL they must be written in this format.
Another tricky part is determining whether a date falls between two dates. Let’s take the peopleDf and write a query which provides everyone who was born between 1998-01-01 and 2005-01-01.
Here’s the query.
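A sketch; the result dataframe’s name is my choice:

```r
# Everyone born after 1998-01-01 AND before 2005-01-01
bornBetweenDf <- sqldf("SELECT *
                        FROM peopleDf
                        WHERE DOB > '1998-01-01'
                        AND DOB < '2005-01-01'")
```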
This should result in a table with only Fela:
It is important to understand, the first comparison removed Sarah, as 1992-04-01 is less than 1998-01-01. Then, the second comparison got rid of Timmy as 2010-01-01 is greater than 2005-01-01.
There is one more critical command in writing robust date comparisons. The NOW() function. This function is different in R and SQL, but pretty much every programming language has a version of the function.
Essentially, the NOW() function asks the computer what today’s date is when the script runs.
In SQL-R it looks like this:
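A sketch; note sqldf’s default engine is SQLite, which spells NOW() as the timestring 'now' (engines like MySQL accept DATE(NOW()) literally):

```r
# Add a TodaysDate column to every row; DATE() converts the raw
# timestamp into a human-readable date
sqldf("SELECT Name, DATE('now') AS 'TodaysDate'
       FROM peopleDf")
```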
This should provide:
And it doesn’t matter when this script is run, it will always insert today’s date in the TodaysDate column. Nifty, right? Trust me, if you don’t see the possibilities yet, give it time. It’ll grow into one of your favorite functions.
Well, we can’t talk about the NOW() function without discussing the DATE() function I slipped in there. What does it do?
As we discussed earlier, the computer looks at dates as the number of seconds since 1970-01-01. When you use the NOW() function by itself, it returns that number of seconds, um, not something humans like to read. The DATE() function says, “Take whatever is inside the parentheses and try to convert it into a human-readable date.” Voila! A human-readable date.
Let’s get fancy. We can use the NOW() function and our peopleDf to calculate everyone’s age.
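One way to sketch it with SQLite’s date functions; julianday() counts days, so dividing by 365.25 approximates whole years:

```r
# Approximate age in whole years as of today
peopleAgesDf <- sqldf("SELECT Name,
                       CAST((julianday('now') - julianday(DOB)) / 365.25 AS INT) AS 'Age'
                       FROM peopleDf")
```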
This should provide:
Cool, right? Now, it does not matter when the above line of code is run; it will calculate everyone’s age correctly.
One important note: if the date and time are wrong on your computer, this calculation will be incorrect.
The nerd-judo which can be done with dates in SQL-R is endless. But this covers a lot of the basics.
If you’ve missed the code bits throughout this article, here it is all at once:
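Gathering the sketches above in one place (same assumptions as noted in each section):

```r
library(sqldf)

# Example data; Fela's exact birthdate is an assumption
peopleDf <- data.frame(Name = c("Sarah", "Fela", "Timmy"),
                       DOB = c("1992-04-01", "1999-01-01", "2010-01-01"))

# Everyone born before 2000-01-01
nonMillennialsDf <- sqldf("SELECT * FROM peopleDf WHERE DOB < '2000-01-01'")

# Everyone born between 1998-01-01 and 2005-01-01
bornBetweenDf <- sqldf("SELECT * FROM peopleDf
                        WHERE DOB > '1998-01-01' AND DOB < '2005-01-01'")

# Today's date on every row
todaysDateDf <- sqldf("SELECT Name, DATE('now') AS 'TodaysDate' FROM peopleDf")

# Approximate ages in whole years
peopleAgesDf <- sqldf("SELECT Name,
                       CAST((julianday('now') - julianday(DOB)) / 365.25 AS INT) AS 'Age'
                       FROM peopleDf")
```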
With this work challenge we are going to take the concepts we’ve learned from the first challenge and build on them. We will combine two dataframes derived from Client.csv and Enrollment.csv. Then, we will apply HUD’s formula to get a by-name-list of those who are chronically homeless.
A “chronically homeless” individual is defined to mean a homeless individual with a disability who lives either in a place not meant for human habitation, a safe haven, or in an emergency shelter, or in an institutional care facility if the individual has been living in the facility for fewer than 90 days and had been living in a place not meant for human habitation, a safe haven, or in an emergency shelter immediately before entering the institutional care facility. In order to meet the “chronically homeless” definition, the individual also must have been living as described above continuously for at least 12 months, or on at least four separate occasions in the last 3 years, where the combined occasions total a length of time of at least 12 months. Each period separating the occasions must include at least 7 nights of living in a situation other than a place not meant for human habitation, in an emergency shelter, or in a safe haven.
There are several data elements which will be needed for us to calculate whether someone is chronically homeless. These data elements are reported to case-managers and entered into a HUD Entry Assessment when a client enters a program.
Here’s a list of the data elements we will use:
All of the above data elements are found in the Enrollment.csv. Therefore, similar to the last Challenge, we will need to join the Client.csv and the Enrollment.csv.
We’ve covered how to get all data from CSVs into one dataframe using joins. This Challenge will build on that skill. The new concepts here will be combining logic to get to a specific answer.
For example, let’s take the chronically homeless definition and turn it into something a computer can understand using these logic operators. We can do this by re-writing the definition several times, each time dropping what makes sense to humans and leaving what makes sense to computers.
For example, this should make sense to most humans.
A chronically homeless individual is disabled and has been homeless greater than 364 days. Or, is disabled and has been homeless greater than three times in three years, with the time spent in homelessness adding up to greater than 364 days.
That paragraph seems a little hard to read, right? But still, humans should be able to understand it. Now, let’s look at the same paragraph emphasizing the logic operators.
A chronically homeless individual IS disabled AND has been homeless GREATER THAN 364 days. OR, IS disabled AND has been homeless GREATER THAN three times in three years, AND the time spent in homelessness adds up to GREATER THAN 364 days.
This is the skill of a computational thinker: taking a definition like the one HUD provided and re-writing it from something a human would understand into something a computer will understand.
The next step is re-writing the paragraph in something called pseudo-code.
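The original pseudo-code isn’t reproduced here; using the Enrollment.csv element names, a sketch of it might read:

```
IF DisablingCondition IS Yes
AND (
    days homeless since DateToStreetESSH ARE GREATER THAN 364
    OR (TimesHomelessPastThreeYears IS GREATER THAN three
        AND MonthsHomelessPastThreeYears IS AT LEAST 12)
)
THEN flag the client as chronically homeless
```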
This helps us make sure everything is in place to feed to the computer. The next step will be actually writing the SQL code.
Below is the code to get the chronically homeless:
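The original listing isn’t reproduced here; below is a sketch of its shape, assuming the HMIS Data Standards element names and response codes (1 = “Yes” for DisablingCondition, 4 = “four or more times”, 112 and 113 = “12 months” and “more than 12 months”; these code values are my reading of the standards, so verify them against your export):

```r
library(sqldf)

# Steps 1-3: load and inner join Client.csv and Enrollment.csv
clientDf <- read.csv("Client.csv")
enrollmentDf <- read.csv("Enrollment.csv")

clientAndEnrollmentDf <- sqldf("SELECT clientDf.PersonalID, FirstName, LastName,
                                       DisablingCondition, DateToStreetESSH,
                                       TimesHomelessPastThreeYears,
                                       MonthsHomelessPastThreeYears
                                FROM clientDf
                                INNER JOIN enrollmentDf
                                ON clientDf.PersonalID = enrollmentDf.PersonalID")

# Steps 4-5: disabled AND (homeless more than 364 days straight, OR homeless
# four-or-more times in three years adding up to 12+ months)
chronicallyHomelessDf <- sqldf("SELECT *
                                FROM clientAndEnrollmentDf
                                WHERE DisablingCondition = 1
                                AND (
                                  (julianday('now') - julianday(DateToStreetESSH)) > 364
                                  OR (TimesHomelessPastThreeYears = 4
                                      AND MonthsHomelessPastThreeYears >= 112)
                                )")
```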
This may look overwhelming, but that’s the purpose of this week’s Challenge: to demonstrate this code is actually pretty simple when broken down into its basic parts.
That’s the real lesson here: every complex question may be made extremely simple when taken one piece at a time. The power of computational thinking is extraordinary.
We are going to merge the two data sets and discover the following:
A list of individuals who are chronically homeless.
Export this list to an Excel document.
To get this information we will need to do the following:
Load the Client.csv into the dataframe clientDf.
Load the Enrollment.csv into the dataframe enrollmentDf.
Inner join the clientDf to enrollmentDf.
Calculate whether someone is chronically homeless.
Filter to those who are chronically homeless.
Write the by-name-list of individuals to an Excel document (sketched after this list).
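And a sketch of step 6, reusing the XLConnect pattern from earlier (the file and sheet names are my choices):

```r
library(XLConnect)

# Save the by-name list produced by the chronically homeless filter
chronicWorkbook <- loadWorkbook("ChronicallyHomeless.xlsx", create = TRUE)
createSheet(chronicWorkbook, name = "By-Name List")
writeWorksheet(chronicWorkbook, chronicallyHomelessDf, sheet = "By-Name List")
saveWorkbook(chronicWorkbook)
```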
Below are the resources which should help for each step:
Step 1 & 2
R Programming A-Z – Video 41 – Loading and Importing Data in R
I’m fat. Fatter than I want to be. I’ve not always been fat; I got down to 180 back in 2008. It took counting calories and tracking my weight religiously. The key piece for me was having a graph, which I looked at daily, showing my outcomes. Over the course of a year I lost 40 pounds. Well, it’s time to do it again. I’ve gained that 40 back over 10 years, and now it needs to go.
Back in 2008 I was using Google to give me the calories of every item I ate and recording them in an Excel document. This food journal was great, but a little more work than it probably should have been.
Back then, I wasn’t aware of being a hacker. Now, I plan to throw all my hacker skills at this weight loss plan (hell, I might even go to the gym!)
I signed up for MyFitnessPal. Counting calories worked once; I figure, if it ain’t broke. But then I got to looking at how much work it would take to look at my improvement. I mean, I’d have to actually open the app on my phone and click on the weight loss section. Sheesh, who designed that app? Two actions to get where I needed to be? Ain’t no one got time for that.
Enter hacker skills. I discovered there is a Python library which allows scraping of MyFitnessPal data.
This wonderful little library is written and provided by CoddingtonBear.
I figured I’d write a Python script to scrape the data, save it to a CSV, create an SQL-R script to join the nutrition and weight information, use ggplot to plot the data, save the plot as a PNG, and then copy this plot to a dedicated spot on Ladvien.com. Lastly, I’d write a bash script to run every night and update the graph. Simples!
And c’mon, opening a webpage is a lot easier than tapping twice.
Well, after a few hours of coding, I’ve got the first step of the project complete.
And then we add some R to join the data together, automate the plotting, and save the plots as images.
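That script isn’t reproduced here, but its shape is something like this (the file, column, and dataframe names are guesses on my part):

```r
library(sqldf)
library(ggplot2)

# CSVs assumed to have been written by the Python scraper
weightDf <- read.csv("weight.csv")
nutritionDf <- read.csv("nutrition.csv")

# Join the weight and nutrition information by date
healthDf <- sqldf("SELECT weightDf.Date AS Date, Weight, Calories
                   FROM weightDf
                   LEFT JOIN nutritionDf
                   ON weightDf.Date = nutritionDf.Date")

# Plot weight over time and save the plot as a PNG
weightPlot <- ggplot(healthDf, aes(x = as.Date(Date), y = Weight)) +
  geom_line()
ggsave("weight.png", plot = weightPlot, width = 8, height = 4)
```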
Lastly, let’s write a bash script to run the Python and R code, then copy the images to Ladvien.com
And here’s the result:
And my calories:
Next, I’ll probably tweak ggplot2 to make the graphs a little prettier. Also, I’ll set up a Raspberry Pi or something to run the bash script once a night. Why? Lolz.