Working with R Excel Libraries

We’ve worked a bit with Comma Separated Values (.csv) files, but they aren’t the only way to store data. There are many data storage formats, each with its strengths and weaknesses. One deficit of the CSV format is it cannot store formatting or graphs. This is the reason the Excel format (.xls or .xlsx) has become another industry standard.

Excel is a program created by Microsoft to allow people to easily work with spreadsheets. With it, Microsoft created a way of storing data which allows formatting and other information to be included. In fact, Excel documents have become so sophisticated that programmers can include entire programs within a document. This is the reason you’ll often get the “Enable Content” button when you open an Excel document. It means there is some code embedded in the document which will run if you click “Enable.” (Be careful, malicious programmers can insert code which could royally blink up your computer.)

When working with HMIS data, being able to load and write Excel documents is necessary. Unfortunately, it adds a lot of complexity.

There are several R libraries which allow us to work with Excel documents in R. They have different strengths; therefore, I’ll focus on two libraries rather than one.

Installing R Libraries for Excel

Installing either of these libraries should be as simple as running the following code:

install.packages("XLConnect", dependencies=TRUE)
install.packages("openxlsx")

However, there are lots of ifs. Both of these libraries rely on the rJava library, and unfortunately, there is often a mismatch of computer architectures. What does that mean? Well, often you’ll install R for amd64, but rJava is easiest to get working with R for i386.

Just know, RStudio has a way to set the version of R you are using: go to Tools, then Global Options. If you are in Windows, at the top of the R General section you will see the option to change your R version. If you are having difficulty getting the above Excel libraries working, try switching the R version to i386. (Don’t forget to restart RStudio after switching.)

Past this, I’d be more than happy to help you troubleshoot. Just leave a comment below or shoot me an email. However, it can get pretty hairy, especially on a Mac.

Working with XLConnect

Nowadays, I only use XLConnect to load data from Excel sheets. I’ve just been too lazy to re-write all my code to use one library (which would be openxlsx). In my opinion, the reason to use XLConnect is it’s a little easier to understand how it loads data. Its weakness is it doesn’t have as much flexibility in formatting Excel documents to be saved on your computer. And saving Excel sheets with it can be confusing.

Loading Data from Xlsx Documents

Loading data using XLConnect is a little different than using the read.csv() function. Like I stated earlier, .xlsx documents contain other information besides data. One critical piece of information is the sheet number.

Unlike CSVs, a single Excel document can contain multiple spreadsheets. Each of these sheets will be broken out into tabs when you open an Excel document.

XLConnect doesn’t make any assumptions; it wants you to tell it which sheet you’d like to load.
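
If you aren’t sure which sheets a document contains, XLConnect can list them for you. A quick sketch, assuming the same file used in the example below:

library(XLConnect)
wb <- loadWorkbook("/Users/user/Data/VI-SPDAT v2.0.xlsx")
# getSheets() returns the sheet names, left to right.
getSheets(wb)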

Here’s how to load the first sheet of an Excel document in XLConnect:

library(XLConnect)
excelDf <- readWorksheetFromFile("/Users/user/Data/VI-SPDAT v2.0.xlsx", sheet = 1, startRow = 1)

It is similar to the read.csv() function, but notice the file in the path refers to VI-SPDAT v2.0.xlsx. You want to make sure your file format is either .xlsx or .xls, as the readWorksheetFromFile() function only works with Excel documents.

Also, there are two other parameters. The first, sheet = 1, tells XLConnect to read in only the first sheet. Just know, you could set it to whatever sheet number you’d like. For reference, the sheets are numbered 1, 2, 3, 4, etc., left to right, when opened in Excel. So, even if your sheets have different names, XLConnect will still load the data respective to its numerical order.

The second parameter is startRow = 1. This allows you to tell R which row to treat as the start of the dataframe. This is useful, for example, if your Excel document has a header section which doesn’t contain data.

We could skip down to row three, where the column headers are, by telling XLConnect startRow = 3.
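
For instance, a minimal sketch assuming the same workbook had its column headers on row three:

library(XLConnect)
excelDf <- readWorksheetFromFile("/Users/user/Data/VI-SPDAT v2.0.xlsx",
                                 sheet = 1,
                                 startRow = 3)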

Writing a Dataframe to an Excel Document

Writing Excel documents is a little more complex, which is one reason I’m not a huge fan of XLConnect.

Here’s how you’d write an Excel file:

######################### Data ###################################
###################### DO NOT CHANGE #############################
peopleDf <- data.frame(PersonalID=c("ZP1U3EPU2FKAWI6K5US5LDV50KRI1LN7", "IA26X38HOTOIBHYIRV8CKR5RDS8KNGHV", "LASDU89NRABVJWW779W4JGGAN90IQ5B2"), 
                       FirstName=c("Timmy", "Fela", "Sarah"),
                       LastName=c("Tesa", "Falla", "Kerrigan"),
                       DOB=c("2010-01-01", "1999-1-1", "1992-04-01"))
##################################################################
##################################################################

# Create a workbook to contain the worksheet(s).
peopleWorkbook <- loadWorkbook("People.xlsx", create = TRUE)
# Create and name the worksheet.
myPeopleWorksheet <- createSheet(peopleWorkbook, "My People")
# Add the data to the worksheet.
writeWorksheet(peopleWorkbook, data = peopleDf, sheet = "My People")
# Save the workbook to the computer.
saveWorkbook(peopleWorkbook)

After running this code you should have a file called People.xlsx in your working directory (remember, getwd() will provide the working directory). If you open this file, it should look something like this:

This looks a little complex, but that’s just because XLConnect makes it look complex. Here’s what it is doing:

  1. A workbook is created, which is a place where worksheets can be stored.
  2. myPeopleWorksheet is created inside the workbook created above. The sheet is called “My People”.
  3. The worksheet has our peopleDf added to it, then it is saved as a file called “People.xlsx” in our working directory.

Like I said, it’s a lot of unneeded complexity, in my opinion.

Why Use Excel Documents?

After the added complexity of reading and saving Excel documents, you might wonder what the benefit is. Great question.

As stated at the beginning, Excel documents can contain other information besides just data. They can contain formatting, images, graphs, and a lot of other stuff. And one of the reasons for writing report scripts is to automate all redundant tasks.

Imagine you’ve got a data set of 12,000 participant enrollments. You want to create a spreadsheet which puts the enrollments in descending order. And you want to create this report daily.

If you used write.csv() you would need to open the CSV after creating it, manually add the sort to the document, save it as an Excel file, then send it out. I guarantee, after doing that for several weeks you are going to want to find a way to automate it. Especially if you decide the headers need to have font size 18 as well.
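
Here’s a rough sketch of that kind of automation in R; the dataframe and column names (enrollmentsDf, EnrollmentDate) are hypothetical stand-ins for your own data:

library(XLConnect)
# Sort the enrollments in descending order (column name is hypothetical).
sortedDf <- enrollmentsDf[order(enrollmentsDf$EnrollmentDate, decreasing = TRUE), ]
# Write the sorted data straight to an Excel document. No manual steps.
writeWorksheetToFile("DailyEnrollmentReport.xlsx", data = sortedDf, sheet = "Enrollments")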

Excel documents allow us to store the formatting tweaks, and XLConnect allows us to insert them automatically.

Adding formatting can get a little more complex and will be the focus of another article. There we will use openxlsx, as it is much easier for outputting formatting; again, just my opinion.

Comparing Values in R and SQL

Comparative Functions

Comparing two or more values is an extremely important concept when talking to computers. In writing a report script, it is essential. Comparisons allow us to filter to values within a range, letting us provide a report of relevant information.

Take the following data:

######################### Data ###################################
###################### DO NOT CHANGE #############################
peopleDf <- data.frame(PersonalID=c("ZP1U3EPU2FKAWI6K5US5LDV50KRI1LN7", "IA26X38HOTOIBHYIRV8CKR5RDS8KNGHV", "LASDU89NRABVJWW779W4JGGAN90IQ5B2"), 
           FirstName=c("Timmy", "Fela", "Sarah"),
           LastName=c("Tesa", "Falla", "Kerrigan"),
           DOB=c("2010-01-01", "1999-1-1", "1992-04-01"))
##################################################################
##################################################################

If you run the above in R you should get a dataframe called peopleDf which looks like this:

PersonalID FirstName LastName DOB
ZP1U3EPU2FKAWI6K5US5LDV50KRI1LN7 Timmy Tesa 2010-01-01
IA26X38HOTOIBHYIRV8CKR5RDS8KNGHV Fela Falla 1999-1-1
LASDU89NRABVJWW779W4JGGAN90IQ5B2 Sarah Kerrigan 1992-04-01

It’s a simple table. But let’s say we wanted a list of everyone born before 2000-01-01. Of course, we can easily see Timmy is the only person born after 2000. But if our table held thousands of records it wouldn’t be possible to assess quickly.

Luckily, this is pretty straightforward in SQL-R. We will use the less than operator (<). You probably remember this sign from high school, from solving inequalities. However, we will be using it as what’s known as a relational operator.

In short, it states,

Is x less than y

If x is less than y, the computer is going to say the statement is true (or 1 in binary). If it is not, then the computer will say it’s false (or 0 in binary). Believe it or not, this simple operation is why you have a device in your pocket which could calculate the internal mass of the sun.

For us, things are a little simpler. We just want to know who was born before 2000. Let’s re-write the statement above with our problem:

Is Sarah’s DOB less than 2000-01-01

Well, what is Sarah’s DOB? 1992-04-01. Let’s rewrite and assess (gah, this feels like high-school algebra again).

Is 1992-04-01 less than 2000-01-01

Hmm. This can get confusing for humans, but more importantly, confusing to computers.

In English, we’d probably state this as,

Did 1992-04-01 come before 2000-01-01?

Essentially, that’s what we are doing. Just know, the computer will translate all dates into a number: how many seconds have transpired since 1970-01-01.

Why? On Thursday, January 1st, 1970, Coordinated Universal Time (UTC) was established. Think of it as when the world came together to standardize time. Computer people figured, “Well, if we have to convert dates into a raw number for computers to understand them, it might as well be the number of seconds since UTC was established.”

Ok, enough history lesson. How is this relevant?

  1. Computers convert dates into seconds since 1970-01-01.
  2. Comparing dates is actually comparing numbers.

Taking our statement again, let’s re-write it with the number of seconds since 1970-01-01:

Is number of seconds between 1970-01-01 and 1992-04-01 less than number of seconds between 1970-01-01 and 2000-01-01

Which becomes:

Is 702,086,400 seconds less than 946,684,800 seconds

Aha, now this makes sense. And the result is true. We can now say, in computer speak: Sarah was born before 2000-01-01.
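
You don’t have to take my word for the arithmetic; R will show you the raw numbers. A quick check in the console (seconds since 1970-01-01, UTC):

as.numeric(as.POSIXct("1992-04-01", tz = "UTC"))  # 702086400
as.numeric(as.POSIXct("2000-01-01", tz = "UTC"))  # 946684800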

Why? Really, dude.

In my world there is a saying: RTFM.

It’s hard to follow nowadays. Everything moves quickly and we don’t have time to dig into the “why.” But, like most things, if you want to be good, you must take the time to do so.

The reason we review how computers understand dates is it directly impacts how we write reports. Do you remember the date conversion trick needed to get dates to work in SQL from R? That trick exists because R holds dates as a raw number of days (or seconds, for date-times) since 1970 and passes it to SQL, which then tries to convert the date into a number again, screwing everything up.
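
If a dataframe column is stored as an R Date and sqldf is mangling it, one common workaround is handing SQL the dates as plain text, so SQLite compares ‘YYYY-MM-DD’ strings directly. A sketch:

# Convert the Date column to character before querying, so SQLite
# receives 'YYYY-MM-DD' text rather than a raw number.
peopleDf$DOB <- as.character(peopleDf$DOB)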

It pays to RTFM.

Filtering Dataframes by Date

Back to the problem. How do we write a script which provides a dataframe of people born before 2000-01-01?

The code is actually pretty simple,

library(sqldf)
nonMillennialsDf <- sqldf("SELECT * FROM peopleDf WHERE DOB < '2000-01-01'")

This should provide a nonMillennialsDf dataframe, which contains:

PersonalID FirstName LastName DOB
IA26X38HOTOIBHYIRV8CKR5RDS8KNGHV Fela Falla 1999-1-1
LASDU89NRABVJWW779W4JGGAN90IQ5B2 Sarah Kerrigan 1992-04-01

And there we go, for all my nerdsplaining the code’s pretty simple, right?

Well, there are a few gotchas. Notice the date we’ve written: it has the format YYYY-MM-DD and is surrounded by single quotes. Any time you use dates in SQL they must be written in this format.

Another tricky part is finding whether a date falls between two dates. Let’s take the peopleDf and write a query which provides everyone born between 1998-01-01 and 2005-01-01.

Here’s the query.

bornBetweenDf <- sqldf("SELECT * FROM peopleDf WHERE DOB > '1998-01-01' AND DOB < '2005-01-01'") 

This should result in a table with only Fela:

PersonalID FirstName LastName DOB
IA26X38HOTOIBHYIRV8CKR5RDS8KNGHV Fela Falla 1999-1-1

It is important to understand, the first comparison removed Sarah, as 1992-04-01 is less than 1998-01-01. Then, the second comparison got rid of Timmy as 2010-01-01 is greater than 2005-01-01.
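
By the way, SQLite also has a BETWEEN operator which reads a little closer to English. Just know it is inclusive on both ends, unlike the strict < and > used above. A sketch:

bornBetweenDf <- sqldf("SELECT * FROM peopleDf
                        WHERE DOB BETWEEN '1998-01-01' AND '2005-01-01'")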

Now()

There is one more critical command for writing robust date comparisons: the NOW() function. This function is different in R and SQL, but pretty much every programming language has a version of it.

Essentially, NOW() asks the computer what today’s date is at the moment the script runs.

In SQL-R it looks like this:

nowDf <- sqldf("SELECT *, DATE('NOW') As 'TodaysDate' FROM peopleDf")

This should provide:

PersonalID FirstName LastName DOB TodaysDate
ZP1U3EPU2FKAWI6K5US5LDV50KRI1LN7 Timmy Tesa 2010-01-01 2017-07-18
IA26X38HOTOIBHYIRV8CKR5RDS8KNGHV Fela Falla 1999-1-1 2017-07-18
LASDU89NRABVJWW779W4JGGAN90IQ5B2 Sarah Kerrigan 1992-04-01 2017-07-18

And it doesn’t matter when this script is run, it will always insert today’s date in the TodaysDate column. Nifty, right? Trust me, if you don’t see the possibilities yet, give it time. It’ll grow into one of your favorite functions.

Well, we can’t talk about the NOW() function without discussing the DATE() function I slipped in there. What does it do?

As we discussed earlier, the computer looks at dates as the number of seconds since 1970-01-01. When you use the NOW() function by itself, it will return that raw number, um, not something humans like to read. The DATE() function says, “Take whatever is inside the parentheses and try to convert it into a human readable date.” Voila! A human readable date.
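
If you’d like to see the difference for yourself, SQLite’s strftime('%s', ...) hands back the raw seconds while DATE() gives the human-readable version. A quick sketch:

sqldf("SELECT DATE('now') AS 'HumanReadable',
              strftime('%s', 'now') AS 'SecondsSince1970'")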

Age

Let’s get fancy. We can use the NOW() function and our peopleDf to calculate everyone’s age.

peopleWithAgeDf <- sqldf("SELECT *, (DATE('NOW') - DOB) As 'Age' FROM peopleDf")

This should provide:

PersonalID FirstName LastName DOB Age
ZP1U3EPU2FKAWI6K5US5LDV50KRI1LN7 Timmy Tesa 2010-01-01 7
IA26X38HOTOIBHYIRV8CKR5RDS8KNGHV Fela Falla 1999-1-1 18
LASDU89NRABVJWW779W4JGGAN90IQ5B2 Sarah Kerrigan 1992-04-01 25

Cool, right? Now, it does not matter when this line of code is run; it will calculate everyone’s age from today’s date.

One important note, if the date and time are wrong on your computer this calculation will be incorrect.
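
One more caveat: the subtraction above only compares the year parts of the dates, so someone who hasn’t had a birthday yet this year will show up a year too old. If that matters for your report, a more precise sketch counts actual days with SQLite’s JULIANDAY():

peopleWithAgeDf <- sqldf("SELECT *,
                                 CAST((JULIANDAY('now') - JULIANDAY(DOB)) / 365.25 AS Integer) AS 'Age'
                          FROM peopleDf")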

The nerd-judo which can be done with dates in SQL-R is endless. But this covers a lot of the basics.

If you’ve missed the code bits throughout this article, here it is all at once:

######################### Data ###################################
###################### DO NOT CHANGE #############################
peopleDf <- data.frame(PersonalID=c("ZP1U3EPU2FKAWI6K5US5LDV50KRI1LN7", "IA26X38HOTOIBHYIRV8CKR5RDS8KNGHV", "LASDU89NRABVJWW779W4JGGAN90IQ5B2"), 
           FirstName=c("Timmy", "Fela", "Sarah"),
           LastName=c("Tesa", "Falla", "Kerrigan"),
           DOB=c("2010-14-01", "1999-1-1", "1992-04-01"))
##################################################################
##################################################################
library(sqldf)
nonMillennialsDf <- sqldf("SELECT * FROM peopleDf WHERE DOB < '2000-01-01'")
bornBetweenDf <- sqldf("SELECT * FROM peopleDf WHERE DOB > '1998-01-01' AND DOB < '2005-01-01'") 
nowDf <- sqldf("SELECT *, DATE('NOW') As 'TodaysDate' FROM peopleDf")
peopleWithAgeDf <- sqldf("SELECT *, (DATE('NOW') - DOB) As 'Age' FROM peopleDf")

Providing a Chronically Homeless List

With this work challenge we are going to take the concepts we’ve learned from the first challenge and build on them. We will combine two dataframes derived from Client.csv and Enrollment.csv. Then, we will apply HUD’s formula to get a by-name-list of those who are chronically homeless.

Data Needed

The current definition of chronically homeless is found in HUD’s federal register:

A “chronically homeless” individual is defined to mean a homeless individual with a disability who lives either in a place not meant for human habitation, a safe haven, or in an emergency shelter, or in an institutional care facility if the individual has been living in the facility for fewer than 90 days and had been living in a place not meant for human habitation, a safe haven, or in an emergency shelter immediately before entering the institutional care facility. In order to meet the “chronically homeless” definition, the individual also must have been living as described above continuously for at least 12 months, or on at least four separate occasions in the last 3 years, where the combined occasions total a length of time of at least 12 months. Each period separating the occasions must include at least 7 nights of living in a situation other than a place not meant for human habitation, in an emergency shelter, or in a safe haven.

There are several data elements which will be needed for us to calculate whether someone is chronically homeless. These data elements are reported to case-managers and entered into a HUD Entry Assessment when a client enters a program.

Here’s a list of the data elements we will use:

  1. DisablingCondition
  2. TimesHomelessPastThreeYears
  3. MonthsHomelessPastThreeYears
  4. DateToStreetESSH

All of the above data elements are found in the Enrollment.csv. Therefore, similar to the last Challenge, we will need to join the Client.csv and the Enrollment.csv.

We’ve covered how to get all data from CSVs into one dataframe using joins. This Challenge will build on that skill. The new concept here is combining logic to get to a specific answer.

In SQL we will use the following logic operators:

  • IS (==)
  • NOT (!=)
  • AND (&&)
  • OR (||)
  • > (greater than)
  • < (less than)
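
As a toy example before tackling the full definition, here’s how a couple of these operators combine in a sqldf query against the peopleDf from the previous section (the filter itself is arbitrary):

library(sqldf)
# Born after 1998-01-01 AND NOT named Timmy.
sqldf("SELECT * FROM peopleDf
       WHERE DOB > '1998-01-01' AND NOT FirstName = 'Timmy'")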

For example, let’s take the chronically homeless definition and turn it into something a computer can understand using these logic operators. We can do this by re-writing the definition several times, each time dropping what makes sense to humans and leaving what makes sense to computers.

For example, this should make sense to most humans.

A chronically homeless individual is disabled and been homeless greater than 364 days. Or, is disabled and been homeless greater than three times in three years and the time spent in homelessness adding up to greater than 364 days.

That paragraph seems a little hard to read, right? But still, humans should be able to understand it. Now, let’s look at the same paragraph emphasizing the logic operators.

A chronically homeless individual IS disabled AND been homeless GREATER THAN 364 days. OR, IS disabled AND been homeless GREATER THAN three times in three years AND the time spent in homelessness adding up to GREATER THAN 364 days.

This is the skill of a computational thinker: taking a definition like the one HUD provided and re-writing it from something a human would understand into something a computer will understand.

The next step is re-writing the paragraph in something called pseudo-code.

Chronic Homeless Individual ==

                    A person IS Disabled AND
                    A person homeless > 364 days

                    OR

                    A person IS Disabled AND
                    A person homeless >= 4 times AND
                    A person homeless > 12 months within 3 years

This helps us make sure everything is in place to feed to the computer. The next step will be actually writing the SQL code.

Below is the code to get a list of those who are chronically homeless:

#############################################
##### Get those with Disabling Condition ###
#############################################
disablingCondition <- sqldf("SELECT PersonalID 
                            FROM clientAndEnrollmentDf 
                            WHERE DisablingCondition = 1")

#############################################
##### Length-of-Stay ########################
#############################################
# Participants who meet the length-of-stay in homelessness requirement,
# either through four or more occurrences with cumulative duration exceeding a year,
# or a consecutive year.
#                 113 = "12 Months"
#                 114 = "More than 12 Months"
chronicityDf <- sqldf("SELECT PersonalID, 'Yes' As 'Meets LOS'
                               FROM activeEnrollment
                               WHERE (TimesHomelessPastThreeYears = 4
                                    AND (
                                          MonthsHomelessPastThreeYears = 113
                                          OR MonthsHomelessPastThreeYears = 114)
                                        )
                               OR (CAST(JULIANDAY('now') - JULIANDAY(DateToStreetESSH) AS Integer) > 364
                                   AND (DateToStreetESSH != '') 
                                  )
                               ")

#############################################
##### Chronically Homeless ##################
#############################################
# Take the distinct PersonalIDs of individuals who meet both chronicity
# and disabling condition.
chronicallyHomeless <- sqldf("SELECT DISTINCT(a.PersonalID)
                              FROM chronicityDf a
                              INNER JOIN disablingCondition b
                              ON a.PersonalID=b.PersonalID
                             ")

This may look overwhelming, but that’s the purpose of this week’s Challenge: to demonstrate this code is actually pretty simple when broken down into its basic parts.

That’s the real lesson here: every complex question may be made extremely simple when taken one piece at a time. The power of computational-thinking is extraordinary.

The Goal

We are going to merge the two data sets to discover the following:

  1. A list of individuals who are chronically homeless.
  2. Export this list to an Excel document.

To get this information we will need to do the following:

  1. Load the Client.csv into the dataframe clientDf.
  2. Load the Enrollment.csv into the dataframe enrollmentDf.
  3. Inner join the clientDf to enrollmentDf.
  4. Calculate whether someone is chronically homeless.
  5. Filter to those who are chronically homeless.
  6. Write the by-name-list of individuals to an Excel document.
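
Here’s a minimal skeleton of those steps to get you started. Treat it as a sketch: the paths assume the CSVs sit in your working directory, and steps four and five come from the chronicity code shown earlier:

library(sqldf)
library(XLConnect)

# 1 & 2. Load the CSVs.
clientDf     <- read.csv("Client.csv")
enrollmentDf <- read.csv("Enrollment.csv")

# 3. Inner join the two dataframes on PersonalID.
clientAndEnrollmentDf <- sqldf("SELECT a.PersonalID, b.DisablingCondition,
                                       b.TimesHomelessPastThreeYears,
                                       b.MonthsHomelessPastThreeYears,
                                       b.DateToStreetESSH
                                FROM clientDf a
                                INNER JOIN enrollmentDf b
                                ON a.PersonalID = b.PersonalID")

# 4 & 5. Apply the disabling-condition and chronicity queries from earlier
# to produce the chronicallyHomeless dataframe.

# 6. Write the by-name-list to an Excel document.
writeWorksheetToFile("ChronicallyHomeless.xlsx",
                     data = chronicallyHomeless,
                     sheet = "By-Name-List")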

The Resources

Below are the resources which should help for each step:

Step 1 & 2

  • R Programming A-Z – Video 41 – Loading and Importing Data in R
  • R Programming A-Z – Video 21 – Functions in R
  • Read and Write CSVs in R

Step 3

  • The Complete SQL Bootcamp – Video #51 – Overview of Inner Joins
  • The Complete SQL Bootcamp – Video #52 – Example of Inner Joins
  • HMIS, R, SQL – Basics

Step 4 & 5

Step 6

  • Writing Excel Workbooks – Tutorial Coming

Give me MyFitnessPal Data!

I’m fat. Fatter than I want to be. I’ve not always been fat; I got down to 180 back in 2008. It took counting calories and weighing myself religiously. The key piece for me was having a graph, which I looked at daily, showing my outcomes. Over the course of a year I lost 40 pounds. Well, it’s time to do it again. I’ve gained those 40 back over 10 years, and now they need to go.

Back in 2008 I was using Google to give me the calories of every item I ate and recording them in an Excel document. This food journal was great, but a little more work than it probably should have been.

Back then, I wasn’t aware of being a hacker. Now, I plan to throw all my hacker skills at this weight loss plan (hell, I might even go to the gym!)

I signed up for MyFitnessPal. Counting calories worked once; I figure, if it ain’t broke. But then I got to looking at how much work it would take to review my progress. I mean, I’d have to actually open the app on my phone and click on the weight loss section. Sheesh, who designed that app? Two actions to get where I needed; ain’t no one got time for that.

Enter hacker skills. I discovered there is a Python library which allows scraping of MyFitnessPal data.

This wonderful little library is written and provided by CoddingtonBear.

I figured I’d write a Python script to scrape the data and save it to a CSV, create an SQL-R script to join the nutrition and weight information, use ggplot2 to plot the data, save the plot as a PNG, and then copy this plot to a dedicated spot on Ladvien.com. Lastly, I’d write a bash script to run every night and update the graph. Simples!

And c’mon, opening a webpage is a lot easier than tapping twice.

Well, after a few hours of coding, I’ve got the first step of the project complete.

import myfitnesspal
import csv, sys, os
from datetime import datetime

# Get account info
client = myfitnesspal.Client('cthomasbrittain')
# Set start year
startYear = "2008"
# Get limits
beginningDate = datetime.strptime(startYear, "%Y").date()
beginningYear = beginningDate.year
daysInMonth = {1:31, 2:28, 3:31, 4:30, 5:31, 6:30, 7:31, 8:31, 9:30, 10:31, 11:30, 12:31}
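# NOTE: February is hard-coded to 28 days, so Feb 29 on leap years is skipped.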
emptyNutrition = [None, None, None, None, None, None]

print("")
print("################################################")
print("# Scraping MyFitnessPal                        #")
print("# Make sure your account is set to public      #")
print("# and your username and pass are in keychain   #")
print("################################################")
print("")

today = datetime.now().date()
currentYear = today.year

print("")
print("################################################")
print("# Get nutrition and weight information         #")
print("################################################")
print("")

# Loop over years from beginningYear.  Make sure last year is inclusive.
for yearIndex in range(beginningYear, currentYear+1):
    
    # Create a file name based on this year's data
    thisFileName = "healthData_%s.csv" % yearIndex

    # Open CSV as read and write.
    # If file exists, open for read / write
    #   else, create file, write only.
    try:
        f = open(thisFileName, "r+")        # Check to see if file is complete,
        row_count = sum(1 for row in f)     # else, overwrite the file
        if(row_count != 366):               # A year of rows plus headers, and an empty line at end.
            f = open(thisFileName, "w+")
            row_count = 0
    except EnvironmentError:
        f = open(thisFileName, "w+")        # If file does not exist, create it.
        row_count = 0
    
    writer = csv.writer(f)
    
    # Check number of lines. If the year wasn't captured, start over.
    if(row_count < 365):
        # Write headers for totals
        writer.writerow(["Date", "Sodium", "Carbohydrates", "Calories", "Fat", "Sugar", "Protein", "Weight"])
        sys.stdout.write(str(yearIndex)+": ")   # Print has a linefeed.
        sys.stdout.flush()
        for monthIndex in range(1, 12+1):
                
            beginningOfMonthStr = "%s-%s-%s" % (yearIndex, monthIndex, 1)
            endOfMonthStr = "%s-%s-%s" % (yearIndex, monthIndex, daysInMonth[monthIndex])

            beginningOfMonth = datetime.strptime(beginningOfMonthStr, "%Y-%m-%d").date()
            endOfMonth = datetime.strptime(endOfMonthStr, "%Y-%m-%d").date()
            
            thisMonthsWeights = dict(client.get_measurements('Weight', beginningOfMonth, endOfMonth))

            for dayIndex in range(1, daysInMonth[monthIndex]+1):
                
                fullDateIndex = "%s-%s-%s" % (yearIndex, monthIndex, dayIndex)
                thisDate = datetime.strptime(fullDateIndex, "%Y-%m-%d").date()
                if(thisDate > today):
                    break;

                thisDaysNutritionData = client.get_date(yearIndex, monthIndex, dayIndex)
                thisDaysNutritionDataDict = thisDaysNutritionData.totals
                thisDaysNutritionValues = list(thisDaysNutritionDataDict.values())  # list() so it can be concatenated with lists below

                thisDaysWeight = [(thisMonthsWeights.get(thisDate))]
                
                if(len(thisDaysNutritionValues) < 6):
                    thisDaysNutritionValues = emptyNutrition

                dataRow = [fullDateIndex] + thisDaysNutritionValues  + thisDaysWeight
                if dataRow:
                    writer.writerow(dataRow)

            sys.stdout.write("#")
            sys.stdout.flush()
        print(" -- Done.")
        f.close()
    else:
        print((str(yearIndex)+": Exists and is complete."))

And then we add some R to join the data together and automate plotting and saving the plots as images.

library(ggplot2)
library(scales)

cat("*******************************************************\n")
cat("* Starting R                                          *\n")
cat("*******************************************************\n")
cat("\n")
cat("*******************************************************\n")
cat("* Combining Health Data                               *\n")
cat("*******************************************************\n")
cat("\n")
# Thanks Rich Scriven
# https://stackoverflow.com/questions/25509879/how-can-i-make-a-list-of-all-dataframes-that-are-in-my-global-environment
healthDataRaw <- do.call(rbind, lapply(list.files(pattern = ".csv"), read.csv))
# Fill in missing values for calories
healthDataRaw$Calories[is.na(healthDataRaw$Calories)] <- mean(healthDataRaw$Calories, na.rm = TRUE)

date30DaysAgo <- Sys.Date() - 30
date90DaysAgo <- Sys.Date() - 90
date180DaysAgo <- Sys.Date() - 180

cat("*******************************************************\n")
cat("* Creating Weight Graph                               *\n")
cat("*******************************************************\n")
healthData <- healthDataRaw[!(is.na(healthDataRaw$Weight)),]
healthData$Date <- as.Date(healthData$Date)
healthData <- with(healthData, healthData[(Date >= date30DaysAgo), ])
p <- ggplot(healthData, aes(x = Date, y = Weight))+
  geom_line(color="firebrick", size = 1) +
  labs(title ="Ladvien's Weight", x = "Date", y = "Weight")
p
ggsave("ladviens_weight.png", width = 5, height = 5)

cat("\n")

cat("*******************************************************\n")
cat("* Creating Calories Graph                             *\n")
cat("*******************************************************\n")
cat("\n")
#healthData <- healthDataRaw[!(is.na(healthDataRaw$Calories)),]
healthData$Date <- as.Date(healthData$Date)
healthData <- with(healthData, healthData[(Date >= date30DaysAgo), ])
p2 <- ggplot(healthData, aes(x = Date, y = Calories))+
  geom_line(color="firebrick") 
p2

png(filename="ladviens_calories.png")
plot(p2)
dev.off()

cat("*******************************************************\n")
cat("* Finished R Script                                   *\n")
cat("*******************************************************\n")
cat("\n")

Lastly, let’s write a bash script to run the Python and R code, then copy the images to Ladvien.com:

#!/bin/sh
PASSWORD=("$(keyring get system ladvien.com)")

python myfitnesspall_scraper.py

Rscript myfitnesspal_data_sort.R

ECHO ""
ECHO "*******************************************************"
ECHO "* Syncing files to Ladvien.com                        *"
ECHO "*******************************************************"
ECHO ""

# Used SSHPass
# https://gist.github.com/arunoda/7790979

sshpass -p "$PASSWORD" scp ladviens_weight.png ladviens_calories.png root@ladvien.com:/usr/share/nginx/html/images

And here’s the result:

My weight:

And my calories:

Next, I’ll probably tweak ggplot2 to make the graphs a little prettier. Also, I’ll set up a Raspberry Pi or something to run the bash script once a night. Why? Lolz.