[
{
"objectID": "posts/2024-08-09-learning-Julia/index.html",
"href": "posts/2024-08-09-learning-Julia/index.html",
"title": "Learning Julia by WebScraping Amtrak Data",
"section": "",
"text": "Recently two things happened quite close together that started me on the journey to this post.\nSo these two things lead me to this, pulling Amtrak data from the web using Julia. I do not claim to be an expert on Julia but I am learning and I wanted to share my journey, nor to I claim to be an expert at Web Scraping. Taking those things in account lets follow along."
},
{
"objectID": "posts/2024-08-09-learning-Julia/index.html#load-packages",
"href": "posts/2024-08-09-learning-Julia/index.html#load-packages",
"title": "Learning Julia by WebScraping Amtrak Data",
"section": "Load Packages",
"text": "Load Packages\nFirst off I will load the Julia packages I am going to use. The first three all have to do with web scraping, and getting the data off the website. CairoMakie will be used to make the plot. All of the rest are for data wrangling. I already have all of these packages in this project environment so I just need to let the Julia REPL know to load them. If you are brand new to Julia this site really helped explain the idea of project environments to me. I also use VSCode along with the Julia extension which does a great job of handling the project environment.\n\nusing HTTP\nusing Gumbo\nusing Cascadia\nusing DataFrames\nusing DataFramesMeta\nusing Dates\nusing Statistics\nusing CategoricalArrays\nusing CairoMakie"
},
{
"objectID": "posts/2024-08-09-learning-Julia/index.html#setting-up-the-web-scraping",
"href": "posts/2024-08-09-learning-Julia/index.html#setting-up-the-web-scraping",
"title": "Learning Julia by WebScraping Amtrak Data",
"section": "Setting up the Web Scraping",
"text": "Setting up the Web Scraping\nNow that the packages are loaded, we can start setting up the web scraping. From my internet searching I found that Amtrak does have an API but it is quite challenging to use. I found this website Amtrak Status which does a great job of showing the data I was looking for. In this example I am just going to pull data for two trains, train 97 and train 98. You can see in the link I set those as the train numbers, and if you follow the link you will see it sets it up in a nice table to view the historical data. When then use the HTTP package to get the raw website data and then use Gumbo to parse the HTML into a table. The Cascadia package gives the various CSS selectors to help pull the info I want of the entire page. The page table does not have an ids but it is also the only table on the page. I was able to use the CSS Selector “tr” to get each row of the table into a vector. If we examine the third item in the rows vector we see that it has the information we want (the first two rows are headers for the table)\n\n\nurl = \"https://juckins.net/amtrak_status/archive/html/history.php?train_num=97%2C98&station=&date_start=07%2F01%2F2024&date_end=07%2F31%2F2024\";\nresp = HTTP.get(url);\npage = parsehtml(String(resp.body));\n\nrows = eachmatch(sel\"tr\",page.root);\n\nrows[3]"
},
{
"objectID": "posts/2024-08-09-learning-Julia/index.html#creating-the-dataframe",
"href": "posts/2024-08-09-learning-Julia/index.html#creating-the-dataframe",
"title": "Learning Julia by WebScraping Amtrak Data",
"section": "Creating the DataFrame",
"text": "Creating the DataFrame\nNow that each row of the table is stored in a vector we need to rebuild the table into a dataframe in Julia. First I am intializing an empty dataframe by creating each column that will hold data. The column names match those of the header in the table on the website. Then I loop through each item in the rows vector. The text variable is a vector of all the td elements in the row. If the text vector is not empty and has more than one item in it, then we loop through the items and push the text into the row_data vector. Finally we push the row_data vector into the dataframe created prior to the loop. By having the nested if I can remove the footer column at the end of the table from the website. The website table header uses a different CSS selector than the rest of the table but the footer does not. At the end of the loop I now have the same table that is on the website but stored as a dataframe in Julia.\n\n# create empty DataFrame and then populate it with the table from website\ndf = DataFrame(train = String[], origin_date = [], station = String[], sch_dp = [], act_dp = String[], comments = [], s_disrupt = [], cancellations = [])\n\nfor i in rows\n text = eachmatch(Selector(\"td\"), i)\n row_data = []\n if !isempty(text) && length(text) > 1\n for item in text\n push!(row_data, nodeText(item))\n end\n push!(df, row_data)\n end\nend"
},
{
"objectID": "posts/2024-08-09-learning-Julia/index.html#cleaning-the-dataframe",
"href": "posts/2024-08-09-learning-Julia/index.html#cleaning-the-dataframe",
"title": "Learning Julia by WebScraping Amtrak Data",
"section": "Cleaning the DataFrame",
"text": "Cleaning the DataFrame\nComing from R I am quite familiar with data cleaning using dpylr and the rest of the tidyverse packages. When looking at options I really liked what the DataFramesMeta package brings, so I have used that here to get the data were I want it. I first filter out any trains that have a service disruption as well as any that are blank in the departure column. Next I select only the station, train, and the comments column. I originally tried using the two departure columns but was having an issue with trains that arrived at the stations on day but then left the next. These were causing the delay to be quite large as it was calculating as if it actually left before arriving. The comments column has what I needed I just had to pull the string out and convert it to a numeric. After selecting the columns I first create the delay column. This pulled the comment string out of the comment column only if it contains Dp: as this indicates how late or early the train left. Next I am pulling out the time in minutes and hours from the delay string and converting those numbers to integers. The total delay column adds the minutes and hours together and if the word late is not in the column it will convert the number to negative. A negative delay in this case means the train left early. Finally I transform the columns to categorical so that they are easier to work with in the future. You can notice that for the last transformation I could not figure out how to select two columns using the transform macro. Also for those coming from R note the .=> this is the broadcast operator and it lets Julia know to perform the action on the entire vector (I think I am explaining this right!) I end the block by showing the first 5 rows of the modified dataframe.\n\n\nmod_df = @chain df begin\n @rsubset :act_dp != \"\" && :s_disrupt != \"SD\"\n @select :train :station :comments\n #can't perform match if there is nothing there\n @rtransform :delay = occursin(r\"Dp:\", :comments) ? match(r\"Dp:.*\", :comments).match : \"\"\n @rtransform :min = occursin(r\"min\", :delay) ? parse(Int,match(r\"([0-9]*) min\", :delay)[1]) : Int(0)\n @rtransform :hour = occursin(r\"hr\", :delay) ? parse(Int,match(r\"([0-9]*) hr\", :delay)[1]) *60 : Int(0)\n @rtransform :total_delay_mins = :min + :hour |> x -> occursin(r\"late\", :delay) ? x : x *-1 #if word late does not appear, train left early\n transform([:station, :train] .=> categorical, renamecols = false)\nend\n\nfirst(mod_df, 5)\n\n5×7 DataFrame\n\n\n\nRow\ntrain\nstation\ncomments\ndelay\nmin\nhour\ntotal_delay_mins\n\n\n\nCat…\nCat…\nAny\nAbstract…\nInt64\nInt64\nInt64\n\n\n\n\n1\n97\nRMT\nDp: 1 min late.\nDp: 1 min late.\n1\n0\n1\n\n\n2\n98\nFLO\nAr: 7 min early. | Dp: On time.\nDp: On time.\n0\n0\n0\n\n\n3\n98\nKTR\nDp: 12 min late.\nDp: 12 min late.\n12\n0\n12\n\n\n4\n97\nPTB\nDp: 6 min late.\nDp: 6 min late.\n6\n0\n6\n\n\n5\n97\nRVR\nAr: 8 min late. | Dp: 5 min late.\nDp: 5 min late.\n5\n0\n5"
},
{
"objectID": "posts/2024-08-09-learning-Julia/index.html#grouping-and-summarizing",
"href": "posts/2024-08-09-learning-Julia/index.html#grouping-and-summarizing",
"title": "Learning Julia by WebScraping Amtrak Data",
"section": "Grouping and Summarizing",
"text": "Grouping and Summarizing\nNow that I have the data I want, I want to group and summarize to create some graphs. Again using DataFramesMeta and the by keyword I can group by the train and station columns and then create the mean, median, max, and min columns. This action felt very to summarize in dplyr. DataFramesMeta does allow you to do the grouping and combining as two separate steps, but the by keyword combines in into one step. I then ordered by the station column and then by the train column. I then created a column that shows the difference in the mean delay between the two trains. I didn’t end up using this for now but I might make something with it later. Last I created two columns that contain the level code for the station and train columns. I will talk about the reason for this in the next section. The function levelcode is from the CategoricalArrays package and it creates an integer column that matches the level of the categorical name. Last I display the first 5 rows of the dataframe.\n\ngd = @chain mod_df begin\n @by _ [:train,:station] begin\n :mean = Float32[Statistics.mean(:total_delay_mins)]\n :median = Statistics.median(:total_delay_mins)\n :max = maximum(:total_delay_mins)\n :min = minimum(:total_delay_mins) \n end \n @orderby :station :train\n @groupby :station\n @transform :diff = [missing; diff(:mean)]\n @rtransform _ begin\n :station_code = levelcode(:station)\n :train_code = levelcode(:train)\n end\nend\n\nfirst(gd, 5)\n\n5×9 DataFrame\n\n\n\nRow\ntrain\nstation\nmean\nmedian\nmax\nmin\ndiff\nstation_code\ntrain_code\n\n\n\nCat…\nCat…\nFloat32\nFloat64\nInt64\nInt64\nFloat32?\nInt64\nInt64\n\n\n\n\n1\n97\nALX\n70.4\n50.0\n287\n0\nmissing\n1\n1\n\n\n2\n98\nALX\n101.387\n77.0\n399\n-16\n30.9871\n1\n2\n\n\n3\n97\nBAL\n53.3333\n27.0\n267\n3\nmissing\n2\n1\n\n\n4\n98\nBAL\n120.226\n104.0\n414\n0\n66.8925\n2\n2\n\n\n5\n97\nCHS\n71.1\n53.0\n286\n0\nmissing\n3\n1"
},
{
"objectID": "posts/2024-08-09-learning-Julia/index.html#plotting",
"href": "posts/2024-08-09-learning-Julia/index.html#plotting",
"title": "Learning Julia by WebScraping Amtrak Data",
"section": "Plotting",
"text": "Plotting\nComing from R and the ggplot package (also having played around a bit in Plotly for R) there was a rather step learning curve to Makie! I do feel there is a ton of flexibility in Makie, but learning to use it is a beast, and was probably the hardest part of this whole thing. The first challenge was Makie does not like categorical variables (at least for barplots, don’t know if this is always true), thus the need for using the level codes so I could pass a numerical vector to the x axis. I am then able to label that axis with the categorical labels. Makie does also allow you to just call the barplot function without all the other set up, and it will automatically create the figure and axis, however I wanted to do it manually and really build up the graph. First step was setting a color gradient, I used Dark2 from the ColorBrewer schemes, just as a personal preference for one I really like. Next up I create the figure. Directly from the Makie docs, The Figure is the outermost container object. I could pass some arguments to the Figure constructor, and change size or colors, but for this one I just left everything as the defaults. Next up is creating the axis. I placed it at position 1,1 within the previously created figure. I also pass labels for the x and y axis, a title, and then the labels for the xticks. The label roation is in radian so pi/2 rotates the labels 90 degrees. Next I generate the barplot. Not the ! in the function call allows for plotting on an existing axis. (More info on the Bang Operator) Last I set up Labels and Colors for the Legend, and the place the Legend at position 1,2 of the existing figure.\n\ncolors = cgrad(:Dark2_6)\nf = Figure();\nax = Axis(f[1,1], xlabel = \"Station\", ylabel = \"Mean Delay (mins)\", title = \"Mean Delay by Station\", xticks = (1:length(levels(gd.station_code)), levels(gd.station)), xticklabelrotation = pi/2)\nbarplot!(ax, gd.station_code, gd.mean, dodge = gd.train_code, color = colors[gd.train_code]) \n\nlabels = [\"$i\" for i in unique(gd.train)]\nelements = [PolyElement(polycolor = colors[i]) for i in unique(gd.train_code)]\n\nLegend(f[1,2],elements, labels, \"Train Number\")\n\n\nf"
},
{
"objectID": "posts/2024-08-09-learning-Julia/index.html#conclusion",
"href": "posts/2024-08-09-learning-Julia/index.html#conclusion",
"title": "Learning Julia by WebScraping Amtrak Data",
"section": "Conclusion",
"text": "Conclusion\nThere is still a lot that could be done with this data set, and I am interested to keep playing around with it to see what kind of insights I could gather. Overall I learned a lot about Julia but as I learned with R there is always more to learn! I look forward to see where this journey takes me."
},
{
"objectID": "posts/2023-10-12_DHSC_Capstone/index.html",
"href": "posts/2023-10-12_DHSC_Capstone/index.html",
"title": "Reflex Testing using Machine Learning in the Clinical Laboratory",
"section": "",
"text": "Full Paper\nTo view the full paper please go to the following link\n\n\nAbstract\nIntroduction: This research study focuses on developing and testing a machine learning algorithm to predict the FT4 result or diagnose hyper or hypothyroidism in clinical chemistry. The goal is to bridge the gap between hard-coded reflex testing and fully manual reflective testing using machine learning algorithms. The significance of this study lies in the increasing healthcare costs, where laboratory services contribute significantly to medical decisions and budgets. By implementing automated reflex testing with machine learning algorithms, unnecessary laboratory tests can be reduced, resulting in cost savings and improved efficiency in the healthcare system.\nMethods: The study was performed using the Medical Information Mart for Intensive Care (MIMIC) database for data collection. The database consists of de-identified health-related data from critical care units. Eighteen variables, including patient demographics and lab values, were selected for the study. The data set was filtered based on specific criteria, and an outcome variable was created to determine if the Free T4 value was diagnostic. The data handling and modeling were performed using R and R Studio. Regression and classification models were screened using a random grid search to tune hyperparameters, and random forest models were selected as the final models based on their performance. The selected hyperparameters for both regression and classification models are specified.\nResults: The study analyzed a dataset of 11,340 observations, randomly splitting it into a training set (9071 observations) and a testing set (2269 observations) based on the Free T4 laboratory diagnostic value stratification. Classification algorithms were used to predict whether Free T4 would be diagnostic, achieving an accuracy of 0.796 and an AUC of 0.918. The model had a sensitivity of 0.632 and a specificity of 0.892. The importance of individual analytes was assessed, with TSH being the most influential variable. The study also evaluated the predictability of Free T4 results using regression, achieving a Root Mean Square Error (RMSE) of 0.334. The predicted results had an accuracy of 0.790, similar to the classification model.\nDiscussion: The study found that the diagnostic value of Free T4 can be accurately predicted 80% of the time using machine learning algorithms. However, the model had limitations in terms of sensitivity, with a false negative rate of 16% for elevated TSH results and 20% for decreased TSH results. The model achieved a specificity of 89% but did not meet the threshold for clinical deployment. The importance of individual analytes was explored, revealing unexpected correlations between TSH and hematology results, which could be valuable for future algorithms. Real-world applications could use predictive models in clinical decision-making systems to determine the need for Free T4 lab tests based on predictions and patient signs and symptoms. However, implementing such algorithms in existing laboratory information systems poses challenges.\n\n\n\n\nReuseCC BY 4.0CitationBibTeX citation:@online{belanger2023,\n author = {Belanger, Kyle},\n title = {Reflex {Testing} Using {Machine} {Learning} in the {Clinical}\n {Laboratory}},\n date = {2023-10-12},\n langid = {en}\n}\nFor attribution, please cite this work as:\nBelanger, Kyle. 2023. “Reflex Testing Using Machine Learning in\nthe Clinical Laboratory.” October 12, 2023."
},
{
"objectID": "posts/2021-02-26_tidytuesday-hbcu-enrollment/tidytuesday-2021-week-6-hbcu-enrolment.html",
"href": "posts/2021-02-26_tidytuesday-hbcu-enrollment/tidytuesday-2021-week-6-hbcu-enrolment.html",
"title": "TidyTuesday 2021 Week 6: HBCU Enrollment",
"section": "",
"text": "Introduction\nRecently I was struggling to find a data project to work on, I felt a bit stuck with some of my current projects, so I begun to scour the internet to find something to work on. I stumbled upon (TidyTuesday)[https://github.com/rfordatascience/tidytuesday] a weekly project where untidy data is posted from various sources, for the goal of practicing cleaning and visualizing. There is not right or wrong answers for TidyTuesday, this was exactly what I was looking for! This week (well by the time this was posted, a few weeks ago) the data set was about Historically Black Colleges and Universities. Within the posted data there were a few different data sets, I chose to work with the set dealing with High school Graduation rates, throughout this post I will explain my steps for cleaning and then present a few different graphs. It should also be noted that in the first section my code blocks will build upon themselves, so the same code will be duplicated as I add more steps to it.\n\n\nLoad Data\nIn this first block we will load some required libraries as well as load in the raw data. This dataset contains data for Highschool graduation rates by race. One thing to point out here is the use of import::from(), will its use here is a bit overkill, it was more for my practice. In this case I am importing the function %nin from the Hmisc package, which in the opposite of the function %in% from base R.\n\nlibrary(dplyr)\nlibrary(ggplot2)\n\nimport::from(Hmisc, `%nin%`)\n\nhs_students_raw <- readxl::read_xlsx(\"104.10.xlsx\", sheet = 1)\n\nglimpse(hs_students_raw)\n\nRows: 48\nColumns: 19\n$ Total <dbl> 1910…\n$ `Total, percent of all persons age 25 and over` <dbl> 13.5…\n$ `Standard Errors - Total, percent of all persons age 25 and over` <chr> \"(—)…\n$ White1 <chr> \"—\",…\n$ `Standard Errors - White1` <chr> \"(†)…\n$ Black1 <chr> \"—\",…\n$ `Standard Errors - Black1` <chr> \"(†)…\n$ Hispanic <chr> \"—\",…\n$ `Standard Errors - Hispanic` <chr> \"(†)…\n$ `Total - Asian/Pacific Islander` <chr> \"—\",…\n$ `Standard Errors - Total - Asian/Pacific Islander` <chr> \"(†)…\n$ `Asian/Pacific Islander - Asian` <chr> \"—\",…\n$ `Standard Errors - Asian/Pacific Islander - Asian` <chr> \"(†)…\n$ `Asian/Pacific Islander - Pacific Islander` <chr> \"—\",…\n$ `Standard Errors - Asian/Pacific Islander - Pacific Islander` <chr> \"(†)…\n$ `American Indian/\\r\\nAlaska Native` <chr> \"—\",…\n$ `Standard Errors - American Indian/\\r\\nAlaska Native` <chr> \"(†)…\n$ `Two or more race` <chr> \"—\",…\n$ `Standard Errors - Two or more race` <chr> \"(†)…\n\n\nNow we are going to start cleaning the data. First I am going to filter for years 1985 and up, prior to this year the data set is a bit spardic, so to keep it clean I am only going to look at 1985 and up. There are also 3 odd years (19103,19203,19303) that I am not sure what those are so I will remove that data as well.\n\nhs_students <- hs_students_raw %>% \n filter(Total >= 1985) %>% \n filter(Total %nin% c(19103, 19203, 19303))\n\nNext I am going to convert all columns to be numeric, because of some blanks in the original import all of the columns read in as characters instead of numeric.\n\nhs_students <- hs_students_raw %>% \n filter(Total >= 1985) %>% \n filter(Total %nin% c(19103, 19203, 19303)) %>% \n mutate(across(everything(), as.numeric))\n\nNext I am going to rename the columns. First I rename the column Total, into year, as this column holds the year! 
Then I use stringr::str_remove_all to remove the long phrase ‘percent of all persons age 25 and over’, as well as the number 1. For some reason the Black and White columns each have a number 1 at the end, I think this is for some sort of footnote but we will just remove it.\n\nhs_students <- hs_students_raw %>% \n filter(Total >= 1985) %>% \n filter(Total %nin% c(19103, 19203, 19303)) %>% \n mutate(across(everything(), as.numeric)) %>% \n rename(year = Total) %>% \n rename_with(\n ~stringr::str_remove_all(\n .\n ,\", percent of all persons age 25 and over|1\"\n )\n )\n\nThen I am going to drop the column ‘Total - Asian/Pacific Islander’, each of these races is stored in a seperate column so if I needed the total later for some reason I could calculate it. I am also going to drop the string “Asian/Pacific Islander -”, from the begin of each of those columns, so they will now tell me just which race each column refers too.\n\nhs_students <- hs_students_raw %>% \n filter(Total >= 1985) %>% \n filter(Total %nin% c(19103, 19203, 19303)) %>% \n mutate(across(everything(), as.numeric)) %>% \n rename(year = Total) %>% \n rename_with(\n ~stringr::str_remove_all(\n .\n ,\", percent of all persons age 25 and over|1\"\n )\n ) %>% \n select(-contains(\"Total - Asian/Pacific Islander\")) %>% \n rename_with(\n ~stringr::str_remove_all(\n .\n ,\"Asian/Pacific Islander - \"\n )\n )\n\nI now simply pivot the data longer. A nice trick I learned since I want to pivot everything expect the year column is to use the minus sign to select every column expect the year column in the pivot.\n\nhs_students_long <- hs_students %>% \n tidyr::pivot_longer(-year)\n\nWith the data now in long form I am going to separate the automatically generate name column into two columns titled, stat and race. The data contains both the percent that graduated and the standard error. Then I replace all the NA’s in the stat column with Total, as these are the total percentage and the other rows will be the standard error. Last I dropped the s from standard errors to make it singular.\n\nhs_students_long <- hs_students %>% \n tidyr::pivot_longer(-year) %>% \n tidyr::separate(name, c(\"stat\", \"race\"), sep = \"- \", fill = \"left\") %>% \n tidyr::replace_na(list(stat = \"Total\")) %>% \n mutate(\n across(\n stat\n ,~stringr::str_replace(\n .\n ,\"Standard Errors\"\n ,\"Standard Error\"\n )\n )\n )\n\nI know pivot the date back to wide form, and use the Janitor package to clean the column names. This puts them in lowercase with _ for spaces.\n\nhs_students_wide <- hs_students_long %>% \n tidyr::pivot_wider(names_from = stat, values_from = value) %>% \n janitor::clean_names()\n\nTo make graphing a bit easier with the scales package, I divide both columns by 100. We will see why in the graphs.\n\nhs_students_wide <- hs_students_long %>% \n tidyr::pivot_wider(names_from = stat, values_from = value) %>% \n janitor::clean_names() %>% \n mutate(across(total:standard_error, ~.x/100))\n\nIt’s now time to graph. Notice the use scales::label_percent() as the labels value for the y axis. If the numbers were left as the default values (75 vs 0.75) the percentages would have been 750%, which is obviously very wrong! I also use geom_ribbon to draw the standard error bars around each line. Notice the use of color = NA, by default the ribbon has outlines, I did not like this so doing color = NA turns them off. (It should be noted there are a few other solutions to turning them off but this seemed the easiest to me). 
Last we see the use of the aesthetics argument in scale_color_brewer. By setting this we match the color and fill to be the same color, without setting this, the colors of the error bars and lines don’t match!\n\nhs_students_wide <- hs_students_wide %>% \n mutate(\n ymax = total - standard_error\n ,ymin = total + standard_error\n )\n\ng1 <- hs_students_wide %>% \n filter(race != \"Total\") %>% \n ggplot(aes(x = year, y = total, group = race, color = race)) +\n geom_ribbon(aes(ymax = ymax, ymin = ymin, fill = race), alpha = 0.3, color = NA) +\n geom_line() +\n scale_x_continuous(breaks = seq(1985,2016,3)) +\n scale_y_continuous(labels = scales::label_percent()) +\n scale_color_brewer(palette = \"Dark2\", aesthetics = c(\"color\", \"fill\")) +\n theme_bw() +\n labs(\n x = NULL\n ,y = NULL\n ,title = glue::glue(\"Percentage of High School Graduates by Race\"\n ,\"\\n\"\n ,\"1985 - 2016\")\n ,color = \"Race\" \n ,fill = \"Race\"\n ) +\n theme(\n plot.title = element_text(hjust = 0.5)\n ,legend.title = element_text(hjust = 0.5)\n )\n \ng1\n\n\n\n\n\n\n\n\n\n\nLoad Male/Female Data\nNow the file also contains the same information but split by male and female. I am going to load in that data.\n\nmale_hs_raw <- readxl::read_excel(\"104.10.xlsx\", sheet = 3)\nfemale_hs_raw <- readxl::read_excel(\"104.10.xlsx\", sheet = 5)\n\nHere I will use the same manipulations as above, the only addition is adding a column for sex.\n\nmale_hs <- male_hs_raw %>% \n filter(Total >= 1985) %>% \n filter(Total %nin% c(19103, 19203, 19303)) %>% \n mutate(across(everything(), as.numeric)) %>% \n rename(year = Total) %>% \n rename_with(\n ~stringr::str_remove_all(\n .\n ,\", percent of all persons age 25 and over|1\"\n )\n ) %>% \n select(-contains(\"Total - Asian/Pacific Islander\")) %>% \n rename_with(\n ~stringr::str_remove_all(\n .\n ,\"Asian/Pacific Islander - \"\n )\n ) %>% \n tidyr::pivot_longer(-year) %>% \n tidyr::separate(name, c(\"stat\", \"race\"), sep = \"- \", fill = \"left\") %>% \n tidyr::replace_na(list(stat = \"Total\")) %>% \n mutate(\n across(\n stat\n ,~stringr::str_replace(\n .\n ,\"Standard Errors\"\n ,\"Standard Error\"\n )\n )\n ,sex = \"Male\"\n )\n\n\nfemale_hs <- female_hs_raw %>% \n filter(Total >= 1985) %>% \n filter(Total %nin% c(19103, 19203, 19303)) %>% \n mutate(across(everything(), as.numeric)) %>% \n rename(year = Total) %>% \n rename_with(\n ~stringr::str_remove_all(\n .\n ,\", percent of all persons age 25 and over|1\"\n )\n ) %>% \n select(-contains(\"Total - Asian/Pacific Islander\")) %>% \n rename_with(\n ~stringr::str_remove_all(\n .\n ,\"Asian/Pacific Islander - \"\n )\n ) %>% \n tidyr::pivot_longer(-year) %>% \n tidyr::separate(name, c(\"stat\", \"race\"), sep = \"- \", fill = \"left\") %>% \n tidyr::replace_na(list(stat = \"Total\")) %>% \n mutate(\n across(\n stat\n ,~stringr::str_replace(\n .\n ,\"Standard Errors\"\n ,\"Standard Error\"\n )\n )\n ,sex = \"Female\"\n )\n\nHere we will combine the two data frames and then pivot to our final graphing form.\n\nmale_female_hs_wide <- male_hs %>% \n bind_rows(female_hs) %>% \n tidyr::pivot_wider(names_from = stat, values_from = value) %>% \n janitor::clean_names() %>% \n mutate(across(total:standard_error, ~.x/100)) %>% \n mutate(\n ymax = total - standard_error\n ,ymin = total + standard_error\n )\n\nLets first graph the total for Male and Female graduation rates.\n\ng2 <- male_female_hs_wide %>% \n filter(race == \"Total\") %>% \n ggplot(aes(x = year, y = total, group = sex, color = sex)) +\n geom_ribbon(aes(ymax = ymax, ymin = 
ymin, fill = sex), alpha = 0.3, color = NA) +\n geom_line() +\n scale_x_continuous(breaks = seq(1985,2016,3)) +\n scale_y_continuous(labels = scales::label_percent()) +\n scale_color_brewer(palette = \"Dark2\", aesthetics = c(\"color\", \"fill\")) +\n theme_bw() +\n labs(\n x = NULL\n ,y = NULL\n ,title = glue::glue(\"Percentage of High School Graduates by Sex\"\n ,\"\\n\"\n ,\"1985 - 2016\")\n ,color = \"Sex\" \n ,fill = \"Sex\"\n ) +\n theme(\n plot.title = element_text(hjust = 0.5)\n ,legend.title = element_text(hjust = 0.5)\n )\n\ng2\n\n\n\n\n\n\n\n\nNow I am going to graph by Sex and Race.\n\nrace_filter <- c(\"White\", \"Black\", \"Hispanic\")\n\nmake_label <- function(label){\n # browser()\n result <- stringr::str_split(label, \"\\\\.\")\n unlist(lapply(result, function(x) paste(x[2],x[1])))\n}\n\n\ng2 <- male_female_hs_wide %>% \n filter(race %in% race_filter) %>% \n ggplot(aes(x = year, y = total, group = interaction(sex,race), color = interaction(sex,race))) +\n geom_ribbon(aes(ymax = ymax, ymin = ymin, fill = interaction(sex,race)), alpha = 0.3, color = NA) +\n geom_line() +\n scale_x_continuous(breaks = seq(1985,2016,3)) +\n scale_y_continuous(labels = scales::label_percent()) +\n scale_color_brewer(palette = \"Dark2\", aesthetics = c(\"color\", \"fill\"), labels = make_label) +\n theme_bw() +\n labs(\n x = NULL\n ,y = NULL\n ,title = glue::glue(\"Percentage of High School Graduates by Race and Sex\"\n ,\"\\n\"\n ,\"1985 - 2016\")\n ,color = \"Race & Sex\" \n ,fill = \"Race & Sex\"\n ) +\n theme(\n plot.title = element_text(hjust = 0.5)\n ,legend.title = element_text(hjust = 0.5)\n )\n\ng2\n\n\n\n\n\n\n\n\n\n\nConclusion\nWhile I am sure there is much more that could be done with this data this is where I am going to stop for today. Our graphs clearly show a divide in graduation rates by race, however Sex does not seem to have much of an effect on graduation rates.\n\n\n\n\nReuseCC BY 4.0CitationBibTeX citation:@online{belanger2021,\n author = {Belanger, Kyle},\n title = {TidyTuesday 2021 {Week} 6: {HBCU} {Enrollment}},\n date = {2021-02-26},\n langid = {en}\n}\nFor attribution, please cite this work as:\nBelanger, Kyle. 2021. “TidyTuesday 2021 Week 6: HBCU\nEnrollment.” February 26, 2021."
},
{
"objectID": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html",
"href": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html",
"title": "Diabetes in Rural North Carolina : Data Collection and Cleaning",
"section": "",
"text": "This is the second post in the series exploring Diabetes in rural North Carolina. This post will explore the data used for this project, from collection, cleaning, and analysis ready data."
},
{
"objectID": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#overall",
"href": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#overall",
"title": "Diabetes in Rural North Carolina : Data Collection and Cleaning",
"section": "Overall",
"text": "Overall\nOverall there are four data sources that have been used to create the analysis ready data for this project. There is one additional metadata file that contains the list of all county FIP codes, used for linking the various data sets. All data sets use the county FIPS as the county identifier, the county name is added at the end using the metadata. The image below shows the steps taken to achieve the analysis data set, as well as a table below showing the structure of each data set.\n\n\n\n\n\nData Sources\n\n\nData\nStructure\nSource\nNotes\n\n\n\n\n2010 Census Rural/Urban Housing\none row per county\nUS Census\nNA\n\n\nCounty Health Rankings\none row per county, year\nCounty Health Rankings\nRaw data is one year per file\n\n\nPopulation Estimates\none row per county, year, age group\nUS Census\nNA\n\n\nDiabetes Data\none row per county, year\nCDC Diabetes Atlas\nRaw data is one year per file"
},
{
"objectID": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#rural-housing",
"href": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#rural-housing",
"title": "Diabetes in Rural North Carolina : Data Collection and Cleaning",
"section": "Rural Housing",
"text": "Rural Housing\nThe first data set comes from the US Census, and contains the amount of housing units inside both Urban and Rural areas. The raw data was taken and used to calculate the percentage of housing units in rural areas, as well as adding the classifications of Rural, Mostly Rural, and Mostly Urban. More about these classifications can be read here. This data set is from the 2010 US Census, which is then used to set the rural classification until the next Census (2020).\nView greeter script here\n\n\n\nRural Housing Data Set\n\n\nCounty Fips\nPct Rural\nRural\n\n\n\n\n05131\n20.41\nMostly Urban\n\n\n05133\n69.29\nMostly Rural\n\n\n05135\n77.84\nMostly Rural\n\n\n05137\n100.00\nRural\n\n\n05139\n55.07\nMostly Rural\n\n\n05141\n100.00\nRural\n\n\n\nNote: \n\n\n\n\n Displaying 6 of 3,143 rows"
},
{
"objectID": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#county-health-rankings",
"href": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#county-health-rankings",
"title": "Diabetes in Rural North Carolina : Data Collection and Cleaning",
"section": "County Health Rankings",
"text": "County Health Rankings\nThe second data set comes from County Health Rankings and contains data for the risk factors associated with diabetes, this data set is complied from many different data sources. The data was downloaded by year, and then combine to form one data set. County Health Rankings uses this data to rate health outcomes across all counties of the United States, for this analysis four categories have been extracted from the overall data set. Note that the food environment index is itself a combine measure, it is a score of both access to healthy food based on distance to grocery stores, as well as access based on cost.\nView greeter script here\n\n\n\nCounty Health Rankings Sources\n\n\nMeasure\nData Source\nFirst Year Available\n\n\n\n\nAdult smoking\nBehavioral Risk Factor Surveillance System\n2010\n\n\nAdult obesity\nCDC Diabetes Interactive Atlas\n2010\n\n\nPhysical inactivity\nCDC Diabetes Interactive Atlas\n2011\n\n\nFood environment index\nUSDA Food Environment Atlas, Map the Meal Gap\n2014\n\n\n\nSource: \n\n\n\n\n https://www.countyhealthrankings.org/explore-health-rankings/measures-data-sources/2020-measures\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nCounty Risk Factors Data Set\n\n\nCounty Fips\nYear\nAdult Smoking Percent\nAdult Obesity Percent\nPhysical Inactivity Percent\nFood Environment Index\n\n\n\n\n01001\n2010\n28.1\n30.0\nNA\nNA\n\n\n01003\n2010\n23.1\n24.5\nNA\nNA\n\n\n01005\n2010\n22.7\n36.4\nNA\nNA\n\n\n01007\n2010\nNA\n31.7\nNA\nNA\n\n\n01009\n2010\n23.4\n31.5\nNA\nNA\n\n\n01011\n2010\nNA\n37.3\nNA\nNA\n\n\n\nNote: \n\n\n\n\n\n\n\n Displaying 6 of 34,555 rows"
},
{
"objectID": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#population-estimates",
"href": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#population-estimates",
"title": "Diabetes in Rural North Carolina : Data Collection and Cleaning",
"section": "Population Estimates",
"text": "Population Estimates\nThe third data set also comes from the US Census and contains population estimates for each county in the United States broken down by: year, age-group, sex, race, and ethnicity. For each row in the table the percent of each type of population was calculated using the yearly population total for the county. This breakdown is useful for this project as African-Americans and Hispanics suffer from diabetes at a higher rate then other groups.\nView greeter script here\n\n\n\n\nUS Population Estimates Data Set\n\n\nCounty Fips\nYear\nAge Group\nYear Total Population\nTotal Male Population\nTotal Female Population\nWhite Male Population\nWhite Female Population\nBlack Male Population\nBlack Female Population\nAmerican Indian Male Population\nAmerican Indian Female Population\nAsian Male Population\nAsian Female Population\nNative Hawaiian Male Population\nNative Hawaiian Female Population\nNot Hispanic Male Population\nNot Hispanic Female Population\nHispanic Male Population\nHispanic Female Population\nPct Hsipanic Female Population\nPct Male\nPct Female\nPct White Male Population\nPct White Female Population\nPct Black Male Population\nPct Black Female Population\nPct American Indian Male Population\nPct American Indian Female Population\nPct Asian Male Population\nPct Asian Female Population\nPct Native Hawaiian Male Population\nPct Native Hawaiian Female Population\nPct not Hispanic Male Population\nPct not Hispanic Female Population\nPct Hispanic Male Population\n\n\n\n\n01001\n2010\n0-4\n54773\n1863\n1712\n1415\n1314\n356\n319\n3\n2\n13\n15\n0\n0\n1778\n1653\n85\n59\n0.11\n3.40\n3.13\n2.58\n2.40\n0.65\n0.58\n0.01\n0.00\n0.02\n0.03\n0.00\n0.00\n3.25\n3.02\n0.16\n\n\n01001\n2010\n5-9\n54773\n1984\n1980\n1506\n1517\n398\n369\n15\n6\n15\n22\n1\n4\n1916\n1908\n68\n72\n0.13\n3.62\n3.61\n2.75\n2.77\n0.73\n0.67\n0.03\n0.01\n0.03\n0.04\n0.00\n0.01\n3.50\n3.48\n0.12\n\n\n01001\n2010\n10-14\n54773\n2163\n2129\n1657\n1621\n427\n409\n13\n13\n23\n19\n4\n1\n2098\n2064\n65\n65\n0.12\n3.95\n3.89\n3.03\n2.96\n0.78\n0.75\n0.02\n0.02\n0.04\n0.03\n0.01\n0.00\n3.83\n3.77\n0.12\n\n\n01001\n2010\n15-19\n54773\n2182\n2047\n1601\n1551\n497\n426\n13\n6\n25\n16\n4\n2\n2125\n1996\n57\n51\n0.09\n3.98\n3.74\n2.92\n2.83\n0.91\n0.78\n0.02\n0.01\n0.05\n0.03\n0.01\n0.00\n3.88\n3.64\n0.10\n\n\n01001\n2010\n20-24\n54773\n1573\n1579\n1223\n1219\n306\n316\n6\n7\n6\n7\n3\n2\n1511\n1537\n62\n42\n0.08\n2.87\n2.88\n2.23\n2.23\n0.56\n0.58\n0.01\n0.01\n0.01\n0.01\n0.01\n0.00\n2.76\n2.81\n0.11\n\n\n01001\n2010\n25-29\n54773\n1574\n1617\n1251\n1235\n289\n341\n1\n4\n9\n23\n6\n3\n1505\n1570\n69\n47\n0.09\n2.87\n2.95\n2.28\n2.25\n0.53\n0.62\n0.00\n0.01\n0.02\n0.04\n0.01\n0.01\n2.75\n2.87\n0.13\n\n\n\nNote: \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Displaying 6 of 565560 rows"
},
{
"objectID": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#diabetes-percentages",
"href": "posts/2020-07-25_diabetes-data-collection-and-cleaning/diabetes-in-rural-north-carolina-data-collection-and-cleaning.html#diabetes-percentages",
"title": "Diabetes in Rural North Carolina : Data Collection and Cleaning",
"section": "Diabetes Percentages",
"text": "Diabetes Percentages\nThe final data set comes from the CDC Diabetes Atlas and contains the estimated prevalence of diabetes in each county of the United States, by year. The data set also includes the upper and lower estimated limits, see the previous post for an explanation of how these numbers are calculated. The data was downloaded by year, and then merged into one data set for the project.\nView greeter script here\n\n\n\nUS Diabetes Data\n\n\nYear\nCounty Fips\nDiabetes Percentage\nDiabetes Lower Limit\nDiabetes Upper Limit\n\n\n\n\n2010\n01001\n11.2\n8.8\n13.9\n\n\n2010\n01003\n10.2\n8.7\n11.9\n\n\n2010\n01005\n13.0\n10.6\n15.9\n\n\n2010\n01007\n10.6\n8.2\n13.3\n\n\n2010\n01009\n12.6\n9.8\n15.7\n\n\n2010\n01011\n16.1\n12.4\n20.4"
},
{
"objectID": "posts/2020-06-22_excel-data-multiple-headers/importing-excel-data-with-multiple-header-rows.html",
"href": "posts/2020-06-22_excel-data-multiple-headers/importing-excel-data-with-multiple-header-rows.html",
"title": "Importing Excel Data with Multiple Header Rows",
"section": "",
"text": "Problem\nRecently I tried to important some Microsoft Excel data into R, and ran into an issue were the data actually had two different header rows. The top row listed a group, and then the second row listed a category within that group. Searching goggle I couldn’t really find a good example of what I was looking for, so I am putting it here in hopes of helping someone else!\n\n\nExample Data\nI have created a small Excel file to demonstrate what I am talking about. Download it here. This is the data from Excel. \n\n\nCheck Data\nFirst we will read the file in using the package readxl and view the data without doing anything special to it.\n\nlibrary(readxl) # load the readxl library\nlibrary(tidyverse) # load the tidyverse for manipulating the data\nfile_path <- \"example_data.xlsx\" # set the file path\nds0 <- read_excel(file_path) # read the file\nds0\n\n# A tibble: 7 × 7\n Name `Test 1` ...3 ...4 `Test 2` ...6 ...7 \n <chr> <chr> <chr> <chr> <chr> <chr> <chr>\n1 <NA> Run 1 Run 2 Run 3 Run 1 Run 2 Run 3\n2 Max 22 23 24 25 26 27 \n3 Phoebe 34 34 32 34 51 12 \n4 Scamp 35 36 21 22 23 24 \n5 Chance 1234 1235 1236 1267 173 1233 \n6 Aimee 420 123 690 42 45 12 \n7 Kyle 22 23 25 26 67 54 \n\n\n\n\nNew Header Names\n\nStep 1\nFirst lets read back the data, this time however with some options. We will set the n_max equal to 2, to only read the first two rows, and set col_names to FALSE so we do not read the first row as headers.\n\nds1 <- read_excel(file_path, n_max = 2, col_names = FALSE)\nds1\n\n# A tibble: 2 × 7\n ...1 ...2 ...3 ...4 ...5 ...6 ...7 \n <chr> <chr> <chr> <chr> <chr> <chr> <chr>\n1 Name Test 1 <NA> <NA> Test 2 <NA> <NA> \n2 <NA> Run 1 Run 2 Run 3 Run 1 Run 2 Run 3\n\n\n\n\nStep 2\nNow that we have our headers lets first transpose them to a vertical matrix using the base function t(), then we will turn it back into a tibble to allow us to use tidyr fill function.\n\nnames <- ds1 %>%\n t() %>% #transpose to a matrix\n as_tibble() #back to tibble\nnames\n\n# A tibble: 7 × 2\n V1 V2 \n <chr> <chr>\n1 Name <NA> \n2 Test 1 Run 1\n3 <NA> Run 2\n4 <NA> Run 3\n5 Test 2 Run 1\n6 <NA> Run 2\n7 <NA> Run 3\n\n\nNote that tidyr fill can not work row wise, thus the need to flip the tibble so it is long vs wide.\n\n\nStep 3\nNow we use tidyr fill function to fill the NA’s with whatever value it finds above.\n\nnames <- names %>% fill(V1) #use dplyr fill to fill in the NA's\nnames\n\n# A tibble: 7 × 2\n V1 V2 \n <chr> <chr>\n1 Name <NA> \n2 Test 1 Run 1\n3 Test 1 Run 2\n4 Test 1 Run 3\n5 Test 2 Run 1\n6 Test 2 Run 2\n7 Test 2 Run 3\n\n\n\n\nStep 4\nThis is where my data differed from many of the examples I could find online. Because the second row is also a header we can not just get rid of them. We can solve this using paste() combined with dplyr mutate to form a new column that combines the first and second column.\n\nnames <- names %>%\n mutate(\n new_names = paste(V1,V2, sep = \"_\")\n )\nnames\n\n# A tibble: 7 × 3\n V1 V2 new_names \n <chr> <chr> <chr> \n1 Name <NA> Name_NA \n2 Test 1 Run 1 Test 1_Run 1\n3 Test 1 Run 2 Test 1_Run 2\n4 Test 1 Run 3 Test 1_Run 3\n5 Test 2 Run 1 Test 2_Run 1\n6 Test 2 Run 2 Test 2_Run 2\n7 Test 2 Run 3 Test 2_Run 3\n\n\n\n\nStep 4a\nOne more small clean up task, in the example data the first column header Name, did not have a second label, this has created a name with an NA attached. 
We can use stringr to remove this NA.\n\nnames <- names %>% mutate(across(new_names, ~str_remove_all(.,\"_NA\")))\nnames\n\n# A tibble: 7 × 3\n V1 V2 new_names \n <chr> <chr> <chr> \n1 Name <NA> Name \n2 Test 1 Run 1 Test 1_Run 1\n3 Test 1 Run 2 Test 1_Run 2\n4 Test 1 Run 3 Test 1_Run 3\n5 Test 2 Run 1 Test 2_Run 1\n6 Test 2 Run 2 Test 2_Run 2\n7 Test 2 Run 3 Test 2_Run 3\n\n\n\n\nStep 5\nNow that are new name column is the way we want it, we can use dpylrs pull to return a vector of just that column\n\nnames <- names %>% pull(new_names)\n\n\n\n\nFinal Data\nNow that we have a vector of column names lets read in the original file using our new names. We set the skip argument to 2, to skip the first two rows, and set col_names equal to our vector of names. Note the last step I used the janitor package to provide names in snake case (the default for the clean names function.)\n\nexample_data <- readxl::read_excel(file_path, col_names = names, skip = 2) %>%\n janitor::clean_names()\nexample_data\n\n# A tibble: 6 × 7\n name test_1_run_1 test_1_run_2 test_1_run_3 test_2_run_1 test_2_run_2\n <chr> <dbl> <dbl> <dbl> <dbl> <dbl>\n1 Max 22 23 24 25 26\n2 Phoebe 34 34 32 34 51\n3 Scamp 35 36 21 22 23\n4 Chance 1234 1235 1236 1267 173\n5 Aimee 420 123 690 42 45\n6 Kyle 22 23 25 26 67\n# ℹ 1 more variable: test_2_run_3 <dbl>\n\n\n\n\nOther Help\nWhile searching for some solutions to my problem I found two good examples, however neither did exactly what I was trying to do.\n\nThis post by Lisa Deburine is pretty close to what I was trying to accomplish and gave me a good starting point. Read it here\nThis post by Alison Hill solves a simlar but slightly different problem. In her data the 2nd row is actually metadata not a second set of headers. Read it here\n\n\n\n\n\nReuseCC BY 4.0CitationBibTeX citation:@online{belanger2020,\n author = {Belanger, Kyle},\n title = {Importing {Excel} {Data} with {Multiple} {Header} {Rows}},\n date = {2020-06-22},\n langid = {en}\n}\nFor attribution, please cite this work as:\nBelanger, Kyle. 2020. “Importing Excel Data with Multiple Header\nRows.” June 22, 2020."
},
{
"objectID": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html",
"href": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html",
"title": "Line Graphs and Interactivity",
"section": "",
"text": "Today’s post is all about line graphs using both ggplot for a static graph as well as a package called plotly for interactivity (more on this later). The example graph and data is again coming from Tableau for Healthcare, Chapter 10."
},
{
"objectID": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#load-libraries",
"href": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#load-libraries",
"title": "Line Graphs and Interactivity",
"section": "Load Libraries",
"text": "Load Libraries\nAs always first step is to load in our libraries, I am using quite a few here, some are a bit overkill for this example but I wanted to play around with some fun features today.\n\nlibrary(magrittr) #pipes\nlibrary(ggplot2) #ploting \nlibrary(dplyr) # data manipulation\nlibrary(tidyr) # tidy data\nlibrary(lubridate) #work with dates\nlibrary(stringr) # manipulate strings\nlibrary(plotly)"
},
{
"objectID": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#import-data",
"href": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#import-data",
"title": "Line Graphs and Interactivity",
"section": "Import Data",
"text": "Import Data\nNext lets import our data, this week we are using the sheet Flu Occurrence FY2013-2016. I am unsure if this is form a real data set or not but it is good for demonstration purposes! After importing we can glimpse at our data to understand what is contained within.\n\nds <- readxl::read_xlsx(path = \"../2020-01-04_my-start-to-r/Tableau 10 Training Practice Data.xlsx\"\n ,sheet = \"05 - Flu Occurrence FY2013-2016\"\n )\nds %>% glimpse()\n\nRows: 48\nColumns: 4\n$ Date <dttm> 2012-10-27, 2012-11-24, …\n$ `Tests (+) for Influenza (count)` <dbl> 995, 3228, 22368, 24615, …\n$ `Total Respiratory Specimens Tested (count)` <dbl> 18986, 24757, 66683, 7561…\n$ `% Tests (+) for Influenza` <dbl> 0.05240704, 0.13038737, 0…"
},
{
"objectID": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#transform-data",
"href": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#transform-data",
"title": "Line Graphs and Interactivity",
"section": "Transform Data",
"text": "Transform Data\nI went a bit overboard today with renaming the variables. I wanted to practice writing a function and while it might not be the prettiest or the best way to do this, it worked for what I was trying to accomplish. Note the use of sapply, which lets us run the function on each column name.\n\nformat_names <- function(x) {\n #Fucntion to set all names to lower case, and strip unneeded characters\n x <- tolower(x)\n x <- str_replace_all(x,c(#set each pattern equal to replacement\n \" \" = \"_\"\n ,\"\\\\(\\\\+\\\\)\" = \"pos\" #regualr experssion to match (+)\n ,\"\\\\(\" = \"\"\n ,\"\\\\)\" = \"\"\n ,\"\\\\%\" = \"pct\"\n )\n ) \n }\n\n#run the format name function on all names from DS\ncolnames(ds) <- sapply(colnames(ds),format_names) \n\nNow is were the fun really starts! For this particular data set there are a couple things we need to add to replicate the example. In the original data set the date is stored with month, day, and year; the day is irrelevant and we need to pull out the month as well as the year. For this we can use the lubridate package, first we pull out the month and set it as a factor. For this example our year actually starts in October, so we set our factor to start at October (10), and end with September (9). We then pull out the year, which presents us with a different problem. Again our year starts in October, instead of January. To solve this I have created a variable called date adjustment, in this column is our month is 10 or greater, we will place a 1, if not a 0. We then set our fiscal year to be the actual year plus the date adjustment, this allows us to have our dates in the right fiscal year. Last the percent column is currently listed as a decimal, so we will convert this to a percentage.\n\n# split date time\nds1 <- ds %>% mutate(\n #create month column, then set factors and labels to start fiscal year in Oct\n month = month(ds$date)\n ,month = factor(month\n ,levels = c(10:12, 1:9)\n ,labels = c(month.abb[10:12],month.abb[1:9]))\n ,year = year(ds$date)\n ,date_adjustment = ifelse(month(ds$date) >= 10, 1,0 )\n ,fiscal_year = factor(year + date_adjustment)\n #convert % Pos from decmial to pct\n ,pct_tests_pos_for_influenza = round(pct_tests_pos_for_influenza * 100, digits = 0)\n )\n\nds1 %>% glimpse()\n\nRows: 48\nColumns: 8\n$ date <dttm> 2012-10-27, 2012-11-24, 2012…\n$ tests_pos_for_influenza_count <dbl> 995, 3228, 22368, 24615, 1179…\n$ total_respiratory_specimens_tested_count <dbl> 18986, 24757, 66683, 75614, 5…\n$ pct_tests_pos_for_influenza <dbl> 5, 13, 34, 33, 23, 17, 11, 6,…\n$ month <fct> Oct, Nov, Dec, Jan, Feb, Mar,…\n$ year <dbl> 2012, 2012, 2012, 2013, 2013,…\n$ date_adjustment <dbl> 1, 1, 1, 0, 0, 0, 0, 0, 0, 0,…\n$ fiscal_year <fct> 2013, 2013, 2013, 2013, 2013,…"
},
{
"objectID": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#ggplot",
"href": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#ggplot",
"title": "Line Graphs and Interactivity",
"section": "GGplot",
"text": "GGplot\nThe graph here is pretty straight forward with one exception, group! For this line graph we want ggplot to connect the lines of the same year, if we do not explicitly state this using the group mapping, ggplot will try to connect all the lines together, which of course is not at all what we want!\n\ng1 <- ds1 %>% \n ggplot(aes(x = month, y = pct_tests_pos_for_influenza, color = fiscal_year\n ,group = fiscal_year)) +\n geom_line() +\n labs(\n x = NULL\n ,y = \"% Tests (+) for Influenza\"\n ,color = NULL\n ,title = \"Flu Viral Surveillance: % Respiratory Specimens Positive for Influenza \\nOctober - September \\nFor Flu Seasons 2013 - 2016\"\n ) +\n theme_classic() +\n scale_y_continuous(breaks = seq(0,40,5)) +\n scale_color_manual(values = c(\"#a6611a\",\"#dfc27d\",\"#80cdc1\",\"#018571\"))\n\ng1"
},
{
"objectID": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#plotly",
"href": "posts/2020-02-10_line-graphs-and-interactivity/line-graphs-and-interactivity.html#plotly",
"title": "Line Graphs and Interactivity",
"section": "plotly",
"text": "plotly\nOne of the nice features of Tableau is the fact the graphs are interactive, while a good graph should speak for itself, end users love pretty things. I have been experimenting with Plotly, which has an open source package for R (as well as many other programming languages!). This example only just scratches the surface, but there will be many more to come!\n\ng2 <- ds1 %>% \n plot_ly(x = ~month, y = ~pct_tests_pos_for_influenza, type = \"scatter\", mode = \"lines\" \n ,color = ~fiscal_year\n ,colors = c(\"#a6611a\",\"#dfc27d\",\"#80cdc1\",\"#018571\")\n , hoverinfo = 'y') %>% \n layout(xaxis = list(\n title = \"\"\n )\n ,yaxis = list(\n title = \"% Tests (+) for Influenza\"\n )\n ,title = \"Flu Viral Surveillance: % Respiratory Specimens Positive for Influenza\"\n ,legend = list(\n x = 100\n ,y = 0.5\n ) \n \n )\n\ng2"
},
{
"objectID": "posts/2020-01-04_my-start-to-r/my-start-to-r.html",
"href": "posts/2020-01-04_my-start-to-r/my-start-to-r.html",
"title": "My Start to R",
"section": "",
"text": "Today starts my attempt at sharing my R journey with the world! I have been learning R off and on now since late 2019, I have begun to take it much more serious as I work through my Data Analytics class at UCF. My love for all things numbers and graphs has really blossomed, and I am choosing to share that love with anyone who cares to read. I will not claim to be the best at R, or any programming for that matter, but these are my attempts. Each post in this serious will be replicated a graph created in Tableau from the book Tableau for Healthcare. Todays graph is a simple horizontal bar chart, in transferring to both a new blog site and computer I have unfortunately lost the original bar graph, but trust me the one I created looks just like it.\n\nLoad Libraries\n\nlibrary(tidyr)\nlibrary(magrittr)\nlibrary(ggplot2)\nlibrary(stringr)\nlibrary(dplyr)\n\n\n\nImport Data\n\nds <- readxl::read_excel(\n path = \"Tableau 10 Training Practice Data.xlsx\" \n ,sheet = \"02 - Patient Falls-Single Hosp\"\n )\n\n\n\nClean Data Names\n\n#should make reusable forumla at later time\nnames(ds) <- tolower(names(ds))\nnames(ds) <- str_replace_all(names(ds),\" \", \"_\")\n\n\n\nConvert Data to ‘Long Form’\n\nds1 <- ds %>% \n gather(\"patient_falls_no_injury_rate\" , \"patient_falls_with_injury_rate\"\n ,key = \"injury\" \n ,value = \"rate\" ) %>% \n mutate(injury = (injury == \"patient_falls_with_injury_rate\"))\n\n\n\nGraph 5.1\n\nb1 <- ds %>% \n ggplot(mapping = aes(x = reorder(type_of_care,total_patient_falls_rate ) , y = total_patient_falls_rate)) +\n geom_col(fill = \"#2b83ba\") + \n coord_flip() +\n scale_y_continuous(breaks = NULL) +\n theme(axis.ticks = element_blank()) +\n labs(title = \"Rate of Patient Falls (per 1,000 Pateint Days)\\nby Type of Care for FY2017\"\n ,x = NULL\n ,y = NULL\n ) +\n theme_classic() +\n geom_text(aes(label = format(total_patient_falls_rate, digits = 2)), nudge_y = -.25, color = \"white\")\n \nb1\n\n\n\n\n\n\n\n\n\n\n\n\nReuseCC BY 4.0CitationBibTeX citation:@online{belanger2020,\n author = {Belanger, Kyle},\n title = {My {Start} to {R}},\n date = {2020-01-24},\n langid = {en}\n}\nFor attribution, please cite this work as:\nBelanger, Kyle. 2020. “My Start to R.” January 24, 2020."
},
{
"objectID": "blog.html",
"href": "blog.html",
"title": "Posts",
"section": "",
"text": "Learning Julia by WebScraping Amtrak Data\n\n\n\n\n\n\nJulia\n\n\ndataViz\n\n\n\n\n\n\n\n\n\nAug 27, 2024\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nDoes a US Born Players Birthdate affect their shot at the NHL\n\n\n\n\n\n\ntidytuesday\n\n\nR\n\n\ndataViz\n\n\n\nInspired by TidyTuesday Week 2 - 2024 dataset about Candian Players, lets look at the same anaylyis for American Born Players\n\n\n\n\n\nJun 8, 2024\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nReflex Testing using Machine Learning in the Clinical Laboratory\n\n\nThis post contains the abstract of my Capstone for the Doctorate of Health Science program at Campbell University. \n\n\n\n\n\n\n\n\nOct 12, 2023\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nTidyTuesday 2021 Week 6: HBCU Enrollment\n\n\nTidyTuesday 2021 Week 6: HBCU Enrollment. Posts looks at tidying the data ,as well as making some graphs about the data. \n\n\n\nTidyTuesday\n\n\n\n\n\n\n\n\n\nFeb 26, 2021\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nConverting From Blogdown to Distill\n\n\nA meta post on transferring from a blogdown to distill blog site \n\n\n\nDistill\n\n\n\n\n\n\n\n\n\nJan 12, 2021\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nDiabetes in Rural North Carolina : Data Collection and Cleaning\n\n\nThis is the second post in the series exploring Diabetes in rural North Carolina. This post will explore the data used for this project, from collection, cleaning, and analysis ready data. \n\n\n\n\n\n\n\n\nJul 25, 2020\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nDiabetes in Rural North Carolina : Exploring Prevalence Trends\n\n\nThis post introduces the exploration of the Diabetes epidemic in North Carolina \n\n\n\n\n\n\n\n\nJun 25, 2020\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nImporting Excel Data with Multiple Header Rows\n\n\nA solution for importing Excel Data that contains two header rows. \n\n\n\n\n\n\n\n\nJun 22, 2020\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nBasic Exploration of WHO Tuberculosis Data\n\n\nToday I am going to dive into some real life data from the World Health Organization (WHO), exploring new and relapse cases of Tuberculosis. I clean up the data, and then make a few graphs to explore different variables. \n\n\n\n\n\n\n\n\nFeb 13, 2020\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nLine Graphs and Interactivity\n\n\nTableau for Healthcare Chapter 10. Static and Interactive examples \n\n\n\n\n\n\n\n\nFeb 10, 2020\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nFacets and a Lesson in Humility\n\n\nA look at Tableau for Healthcare Chapter 8. Table Lens graph. \n\n\n\n\n\n\n\n\nJan 29, 2020\n\n\nKyle Belanger\n\n\n\n\n\n\n\n\n\n\n\n\nMy Start to R\n\n\nA short introduction to my blog, and R journey. \n\n\n\n\n\n\n\n\nJan 24, 2020\n\n\nKyle Belanger\n\n\n\n\n\n\nNo matching items"
},
{
"objectID": "index.html",
"href": "index.html",
"title": "About",
"section": "",
"text": "I am a highly accomplished Medical Technologist with an extensive 14-year track record in the medical industry, consistently demonstrating the ability to effectively bridge the divide between medical professionals and information technologists. Proficient in the application of machine learning techniques to enhance medical data analysis and adept at developing innovative R Shiny apps to streamline healthcare processes and improve patient outcomes."
},
{
"objectID": "index.html#bio",
"href": "index.html#bio",
"title": "About",
"section": "",
"text": "I am a highly accomplished Medical Technologist with an extensive 14-year track record in the medical industry, consistently demonstrating the ability to effectively bridge the divide between medical professionals and information technologists. Proficient in the application of machine learning techniques to enhance medical data analysis and adept at developing innovative R Shiny apps to streamline healthcare processes and improve patient outcomes."
},
{
"objectID": "index.html#education",
"href": "index.html#education",
"title": "About",
"section": "Education",
"text": "Education\nCampbell University | Buies Creek, NC\nDoctorate of Health Sciences | August 2020 - May 2023\nUniversity of Central Florida | Orlando, FL\nM.S. in Healthcare Informatics | August 2018 - May 2020\nWestern Carolina University | Cullowhee, NC\nB.S. in Clinical Laboratory Science | August 2005 - May 2009"
},
{
"objectID": "index.html#experience",
"href": "index.html#experience",
"title": "About",
"section": "Experience",
"text": "Experience\nRoche Diagnositcs | IT Workflow Consultant | Oct 2021 - Present\nRoche Diagnostics | Field Application Specialist | July 2012 - Sept 2021\nCape Fear Valley Hospital | Lead Medical Laboratory Scientist | June 2011 - July 2012\nCape Fear Valley Hospital | Medical Laboratory Scientist | June 2009 - June 2011"
},
{
"objectID": "posts/2020-01-29_facets-and-humility/facets-and-a-lesson-in-humility.html",
"href": "posts/2020-01-29_facets-and-humility/facets-and-a-lesson-in-humility.html",
"title": "Facets and a Lesson in Humility",
"section": "",
"text": "Todays post is a lesson in Facets, as well as humility. The task this week was to replicate the graph in Chapter 8 of Tableau for Healthcare in R. The graph in question is called a Table Lens (This is the name the book uses, however I did have trouble finding this name in Google searches), it is a collection of charts with a common theme, this time looking at countries in various WHO regions and some statistics associated with mortality as well as health expenditure. I say this is a lesson in humiltiy as I have read through the excellent book R for Data Science, and yet the idea of faceting a ggplot graph slipped my mind. This ended with hours of trying to find a package in R to line up graphs, and way more time then I care to admit spent on getting things prefect. I did find such a package called cowplots, which can be found here. While this is an excellent package, its use was unecessary and I reverted back to using the excellent facet feature of GGplot, which can be seen below! \n\nLoad Libraries\n\nlibrary(magrittr) #pipes\nlibrary(ggplot2) #ploting \nlibrary(dplyr)\nlibrary(tidyr)\n\n\n\nImport Data\n\nds <- readxl::read_xlsx(path = \"../2020-01-04_my-start-to-r/Tableau 10 Training Practice Data.xlsx\"\n ,sheet = \"03 - WHO Life Expect & Mort\"\n )\n\n\n\nClean Names and Transform\n\nvarnames <- c(\"who_region\", \"country\", \"year\" , \"sex\" , \"life_expect_birth\" , \"neo_mort\"\n ,\"under_five_mort\" , \"health_expenditure\")\nnames(ds) <- varnames\n\n# Order Countries based on Life Expectancy at Birth\n\nds$country <- factor(ds$country, levels = ds$country[order(ds$life_expect_birth)]) \n\n#To \"Long\" Form\n\nds1 <- ds %>% pivot_longer(5:8)#select columns 5 throuh 8, leave new columns at default names\n\n# Set up labels for Facet, as well as function for Facet Labeller\n\nfacet_labels <- list(\n\"life_expect_birth\" = \"Life Expectancy at Birth \" \n,\"neo_mort\" = \"Neonatal Mortality Rate\" \n,\"under_five_mort\" = \"Under-Five Mortality Rate\"\n,\"health_expenditure\" = \"Health Expenditure per Capita (US$)\" )\n\nvariable_labeller <- function(variable,value){\n return(facet_labels[value])\n}\n\n\n\nGraphs\n\nhightlight_countries <- (c(\"Mauritania\", \"South Africa\")) \n\ng1 <- ds1 %>% filter(who_region == \"Africa\") %>% \n mutate(name = factor(name, levels = c(\"life_expect_birth\" , \"neo_mort\"\n ,\"under_five_mort\" , \"health_expenditure\"))\n ,highlight = country %in% hightlight_countries) %>% \n ggplot(aes(x = country, y = value, fill = highlight)) +\n geom_col(show.legend = FALSE) +\n coord_flip() +\n labs(\n title = \"World Bank Life Expectancy, Neonatal & Under-Five Mortality Rates, and Health Expenditure Analysis\"\n ,x = NULL\n ,y = NULL\n ) +\n facet_grid(~name, scales = \"free_x\",labeller = variable_labeller) +\n theme_bw() +\n geom_text(aes(label = round(value, 0)), hjust = 0) +\n scale_y_continuous(expand = expand_scale(mult = c(0,0.2))) +\n scale_fill_manual(values = c(\"TRUE\" = \"#fc8d59\", \"FALSE\" = \"#2b83ba\"))\ng1\n\n\n\n\n\n\n\n\n\n\n\n\nReuseCC BY 4.0CitationBibTeX citation:@online{belanger2020,\n author = {Belanger, Kyle},\n title = {Facets and a {Lesson} in {Humility}},\n date = {2020-01-29},\n langid = {en}\n}\nFor attribution, please cite this work as:\nBelanger, Kyle. 2020. “Facets and a Lesson in Humility.”\nJanuary 29, 2020."
},
{
"objectID": "posts/2020-02-13_basic-who-TB-data/basic-exploration-of-who-tuberculosis-data.html",
"href": "posts/2020-02-13_basic-who-TB-data/basic-exploration-of-who-tuberculosis-data.html",
"title": "Basic Exploration of WHO Tuberculosis Data",
"section": "",
"text": "Today I am going to dive into some real life data from the World Health Organization (WHO), exploring new and relapse cases of Tuberculosis. I clean up the data, and then make a few graphs to explore different variables."
},
{
"objectID": "posts/2020-02-13_basic-who-TB-data/basic-exploration-of-who-tuberculosis-data.html#a-different-way-to-look",
"href": "posts/2020-02-13_basic-who-TB-data/basic-exploration-of-who-tuberculosis-data.html#a-different-way-to-look",
"title": "Basic Exploration of WHO Tuberculosis Data",
"section": "A different way to look",
"text": "A different way to look\nCould there be any correlation between a countries population and the amount of TB cases? Maybe its just as simple as having more people means more people to get sick? Lets bring in another data set, again from World Bank Found Here, this contains total population data by country.\n\npop_raw <- read.csv(\"API_SP.POP.TOTL_DS2_en_csv_v2_713131.csv\"\n ,skip = 4)\n#If this looks famialer its because it is, the data set looks very simalar to the GDP data\n#In the future this could be moved to a function to allow cleaning much easier\npop1 <- pop_raw %>% \n select(-(Indicator.Name:X2012)\n ,-X2019\n ,-X) %>% \n pivot_longer(cols = X2013:X2018\n ,names_to = \"year\" \n ,values_to = \"population\") %>% \n mutate_if(is.character\n ,str_remove_all\n ,pattern = \"X(?=\\\\d*)\")\n\n#now lets combine this into are overall data set\n\nwho_combined <- who_combined %>% \n mutate(year = as.character(year)) %>% \n left_join(y = pop1) %>% \n select(-Country.Name)\n\n#now lets Graph again\n\ng3 <- who_combined %>% \n filter(str_detect(age,\"014|15plus|u\"),year == 2018) %>% \n group_by(country) %>% \n summarise(sum_tb_cases = (sum(values,na.rm = TRUE)/10000)\n ,population = first(population)/1000000\n ,who_region = first(g_whoregion)) %>% \n mutate(\n label = ifelse((population>250), yes = as.character(country),no = \"\")) %>%\n ggplot(aes(x = population, y = sum_tb_cases )) +\n geom_point(aes(color = who_region)) +\n ggrepel::geom_text_repel(aes(x = population, y = sum_tb_cases, label = label)) +\n labs(\n title = \"Total TB Cases by Country compared to Gross Domestic Product (GDP)\"\n ,x = \"Population (in Millions)\"\n ,y = \"Total TB Case (per 10,000 cases)\"\n ,color = \"WHO Region\"\n ) +\n theme_bw() \n\n g3 \n\n\n\n\n\n\n\n\n\nFurther Exploration\nMaybe we are on to something, the more people, the more likely they are to get sick! However India seems to have a very large number of cases so lets break these cases down further by age group for 2018.\n\ng4 <- who_combined %>% \n filter(year == 2018\n ,country == \"India\"\n ,!(str_detect(age,\"15plus|ageunk|u|014\"))\n ,(str_detect(sex,\"m|f\"))\n ) %>% \n mutate(age_range = glue::glue(\"{age_start} -- {age_end}\")) %>% \n ggplot(aes(x = reorder(age_range, as.numeric(age_start)), y = (values/1000), fill = sex)) +\n geom_col(position = \"dodge\") +\n labs(\n title = \"TB Case in India by age and gender 2018\"\n ,x = NULL\n ,y = \"Total Cases (per 1000)\"\n ,fill = \"Gender\") +\n scale_fill_manual(labels = c(\"Female\",\"Male\"), values = c(\"#e9a3c9\",\"#67a9cf\") )\n \ng4\n\n\n\n\n\n\n\n\nThere seems to be a huge spike in cases after adolescences. Females have a sharp decline the older they get, where as male case stay elevated with a slight decrease at 55."
},
{
"objectID": "posts/2020-02-13_basic-who-TB-data/basic-exploration-of-who-tuberculosis-data.html#last-exploration",
"href": "posts/2020-02-13_basic-who-TB-data/basic-exploration-of-who-tuberculosis-data.html#last-exploration",
"title": "Basic Exploration of WHO Tuberculosis Data",
"section": "Last Exploration",
"text": "Last Exploration\nLets look at overall cases in India, going back to 1980 and see if there as been any trends. To get these numbers we will go back to our raw data and strip everything out expect the total count\n\ng5 <- who_raw %>% \n filter(country == \"India\") %>% \n select(year, c_newinc) %>% \n ggplot(aes(x = year, y = c_newinc/1000000)) +\n geom_line() +\n geom_point() +\n labs(\n title = \"New and Relapse Tuberculosis Cases In India \\n1980 -- 2018\"\n ,x = NULL\n ,y = \"Total Cases (in millions)\") +\n theme_bw() +\n theme(plot.title = element_text(hjust = 0.5)) + #center title \n scale_x_continuous(breaks = seq(1980,2020,5)) +\n scale_y_continuous(breaks = scales::pretty_breaks(n=10)) #different way to add tick marks\ng5\n\n\n\n\n\n\n\n\nCases were steadily rising from 1980 to 1990, then suddenly feel off. Starting in the early 2010s there was a sharp increase and the amount of new and relapse cases just keep growing."
},
{
"objectID": "posts/2020-06-25_diabetes-prevalence-in-nc/diabetes-in-rural-north-carolina-exploring-prevalence-trends.html",
"href": "posts/2020-06-25_diabetes-prevalence-in-nc/diabetes-in-rural-north-carolina-exploring-prevalence-trends.html",
"title": "Diabetes in Rural North Carolina : Exploring Prevalence Trends",
"section": "",
"text": "Update\n2022-15-03: Since this was posted the CDC has updated how county level diabetes prevalance is calculated. The data presented here is using previous calcualtions and may no longer be correct. More can be read here\n\n\nAbstract\nDiabetes is growing at an epidemic rate in the United States. In North Carolina alone, diabetes and prediabetes cost an estimated $10.9 billion each year (American Diabetes Asssociation, 2015). This post introduces the exploration of the Diabetes epidemic in North Carolina. Through a series of posts this project will examine various public data available on diabetes and explore possible solutions to address the rise of diabetes in North Carolina. This investigation stems from the Capstone project of my Health Care Informatics Masters program. This post will answer the following questions:\n\n\nWhat is the overall trend of diabetes prevalence in the United States?\n\n\n\n\nWhat is the trend of diabetes at a State Level and how does diabetes prevalence vary by state and region?\n\n\n\n\nHow do trends in diabetes prevalence vary across counties of North Carolina?\n\n\n\n\nIn which counties of North Carolina does the largest change in diabetes prevalence occur?\n\n\n\n\nHow does change in diabetes prevalence compare between rural and urban counties?\n\n\n\n\nEnviroment\nThis section contains technical information for deeper analysis and reproduction. Casual readers are invited to skip it.\nPackages used in this report.\n\n\nCode\n# Attach these packages so their functions don't need to be qualified: http://r-pkgs.had.co.nz/namespace.html#search-path\nlibrary(magrittr) # enables piping : %>%\nlibrary(dplyr) # data wrangling\nlibrary(ggplot2) # graphs\nlibrary(tidyr) # data tidying\nlibrary(maps)\nlibrary(mapdata)\nlibrary(sf)\nlibrary(readr)\n\n\nDefinitions of global object (file paths, factor levels, object groups ) used throughout the report.\n\n\nCode\n#set ggplot theme\nggplot2::theme_set(theme_bw())\n\n\n\n\nData\nThe data for this exploration comes from several sources:\n\nThe Diabetes data set for state and county levels were sourced from the US Diabetes Surveillance System; Division of Diabetes Translation - Centers for Disease Control and Prevention. 
The data was downloaded one year per file, and compiled into a single data set for analysis.\nThe Diabetes data set for National level data were sourced from the CDC’s National Health Interview Survey (NHIS)\nThe list of rural counties was taken from The Office of Rural Health Policy, the list is available here\n\n\n\n\nCode\n# load the data, and have all column names in lowercase\n\nnc_diabetes_data_raw <- read_csv(\"https://raw.githubusercontent.com/mmmmtoasty19/nc-diabetes-epidemic-2020/62bdaa6971fbdff09214de7c013d40122abbe40d/data-public/derived/nc-diabetes-data.csv\") %>% \n rename_all(tolower)\n\nus_diabetes_data_raw <- read_csv(\"https://github.com/mmmmtoasty19/nc-diabetes-epidemic-2020/raw/62bdaa6971fbdff09214de7c013d40122abbe40d/data-public/raw/us_diabetes_totals.csv\"\n ,skip = 2)\n\nrural_counties <- read_csv(\"https://github.com/mmmmtoasty19/nc-diabetes-epidemic-2020/raw/b29bfd93b20b73a7000d349cb3b55fd0822afe76/data-public/metadata/rural-counties.csv\")\n\ncounty_centers_raw <- read_csv(\"https://github.com/mmmmtoasty19/nc-diabetes-epidemic-2020/raw/b29bfd93b20b73a7000d349cb3b55fd0822afe76/data-public/raw/nc_county_centers.csv\", col_names = c(\"county\", \"lat\",\"long\"))\n\ndiabetes_atlas_data_raw <- read_csv(\"https://raw.githubusercontent.com/mmmmtoasty19/nc-diabetes-epidemic-2020/b29bfd93b20b73a7000d349cb3b55fd0822afe76/data-public/raw/DiabetesAtlasData.csv\"\n ,col_types = cols(LowerLimit = col_skip(), \n UpperLimit = col_skip(),\n Percentage = col_double()), skip = 2)\n\n\n\n\n\nCode\n# load in both US State Map and NC County Map\n\nnc_counties_map_raw <- st_as_sf(map(\"county\",region = \"north carolina\", plot = FALSE,fill = TRUE)) %>% \n mutate_at(\"ID\", ~stringr::str_remove(.,\"north carolina,\"))\n\nstate_map_raw <- st_as_sf(map(\"state\",plot = FALSE,fill = TRUE ))\n\nnc_cities <- st_as_sf(read_csv(\"https://github.com/mmmmtoasty19/nc-diabetes-epidemic-2020/raw/b29bfd93b20b73a7000d349cb3b55fd0822afe76/data-public/metadata/nc_cities.csv\"),\n coords = c(\"long\", \"lat\")\n ,remove = FALSE\n ,agr = \"constant\"\n ,crs = 4326)\n\n\n\n\nData Manipulation\nThe combined data used in this anaylsis can be downloaded here. The only tweaks done here are to combine the rural counties column, and the data for creating maps.\n\nTweaks\n\n\n\nCode\ncounty_centers <- county_centers_raw %>% \n mutate_all(~stringr::str_replace_all(.,\n c(\"\\\\°\" = \"\"\n ,\"\\\\+\" = \"\"\n ,\"\\\\–\" = \"-\"\n )\n ) \n ) %>%\n mutate(across(c(\"lat\",\"long\"), ~iconv(.,from = 'UTF-8', to = 'ASCII//TRANSLIT'))\n ,across(c(\"lat\",\"long\"),~stringr::str_remove_all(.,\"\\\\?\"))) %>% \n mutate_at(c(\"lat\",\"long\"),as.numeric) %>%\n mutate(across(\"long\", ~(. 
* -1))) %>% \n mutate_at(\"county\", tolower)\n\n\n\n\nus_diabetes_data <- us_diabetes_data_raw %>% \n filter(Year >= 2000) %>% \n select( \"Year\",\"Total - Percentage\") %>% \n rename(year = Year , us_pct = `Total - Percentage`)\n\ndiabetes_atlas_data <- diabetes_atlas_data_raw %>% \n mutate_at(\"State\", tolower) %>% \n filter(Year >= 2000)\n\nstate_map_abb <- state_map_raw %>% \n left_join(read_csv(\"https://github.com/mmmmtoasty19/nc-diabetes-epidemic-2020/raw/b29bfd93b20b73a7000d349cb3b55fd0822afe76/data-public/metadata/state-abb.csv\") %>% \n mutate_at(\"state\", tolower)\n ,by = c(\"ID\" = \"state\") )\n\n\n\n\nMerge\n\n\n\nCode\n#join US totals to NC data \n\nnc_diabetes_data <- nc_diabetes_data_raw %>% \n mutate_at(\"county\", ~stringr::str_replace_all(.,\"Mcdowell\",\"McDowell\")) %>% \n mutate(\n rural = county %in% rural_counties$rural_counties\n ) %>% \n mutate_at(\"county\",tolower) %>% \n left_join(us_diabetes_data)\n\n\nnc_counties_map <- nc_counties_map_raw %>% \n left_join(nc_diabetes_data, by = c(\"ID\" = \"county\")) %>% \n left_join(county_centers, by = c(\"ID\" = \"county\")) %>% \n rename(\n center_long = long\n ,center_lat = lat)\n\nstate_map <- state_map_abb %>% \n left_join(diabetes_atlas_data, by = c(\"ID\" = \"State\")) %>% \n rename_all(tolower)\n\n\n\n\n\nOverall - National Level\n\n\nCode\nus_diabetes_data <- us_diabetes_data %>% \n mutate(\n change = lead(us_pct) - us_pct\n ,change = if_else(change > 0, TRUE, FALSE)\n ) %>% \n mutate_at(\"change\", ~stringr::str_replace_na(.,\"NA\"))\n\n\n\no_g1 <- us_diabetes_data %>% \n ggplot(aes(x = year, y = us_pct)) +\n geom_line(color= \"#D95F02\") +\n # geom_line(aes(color = change, group = 1)) +\n geom_point(shape = 21, size = 3,color= \"#D95F02\") +\n # geom_point(aes(color = change),shape = 21, size = 3) +\n scale_color_manual(values = c(\n \"TRUE\" = \"#D95F02\"\n ,\"FALSE\" = \"#7570B3\"\n ), guide = FALSE) +\n labs(\n title = \"Percentage of Diagnosed Diabetes in Adults (18+), National Level\"\n ,x = NULL\n ,y = NULL\n ,caption = \"Note: Data from the CDC's National Health Interview Survey (NHIS)\"\n )\n\no_g1\n\n\n\n\n\n\n\n\n\nOverall, the national average for diagnosed diabetes sharply rose through the early 2000’s, leveling off around 2010. These numbers however, are estimates based on the self-reported response to the CDC’s National Health Interview Survey, and do not represent the actual confirmed diagnoses. The CDC estimates that 1 in 5 adults have undiagnosed diabetes, therefore the numbers reported by the NHIS are likely to underestimate the true prevalence (Centers for Disease Control and Prevention, 2020).\n\n\nOverall - State Level\nState and County level data on diabetes prevalence are taken from the CDC’s Behavioral Risk Factor Surveillance System (BRFSS). These results are based on the question “Has a doctor, nurse, or other health professional ever told you that you have diabetes?”. Women who only experienced diabetes during pregnancy were excluded from the counts. The BRFSS is an ongoing, monthly telephone survey of the non-institutionalized adults (aged 18 years or older) in each state. The year 2011 saw a major change to the methodology of the survey, which started to include homes without a landline phone. 
This change was expected to increase coverage of lower income, lower educational levels, and younger age groups, because these groups often exclusively rely on cellular telephones for personal communication.(Pierannunzi et al., 2012)\n\n\nCode\ns_g1 <- state_map %>% \n st_drop_geometry() %>% \n ggplot(aes(x = year, y = percentage, color = region)) +\n geom_line(aes(group = id ),alpha = 0.3,na.rm = TRUE) +\n geom_smooth(method = \"lm\", se = FALSE) +\n ggpmisc::stat_poly_eq(formula = y ~ + x ,\n aes(label = paste(..eq.label.., ..rr.label.., sep = \"~~~\")), \n parse = TRUE) +\n geom_vline(xintercept = 2011, linetype = \"dashed\", color = \"gray\") +\n scale_color_brewer(palette = \"Dark2\"\n ,direction = -1\n ,labels = snakecase::to_title_case\n ) +\n labs(\n title = \"Percentage of Diagnosed Diabetes in Adults (18+) \\nby State and Region\"\n ,x = NULL\n ,y = NULL\n ,color = \"Region\"\n ,caption = \"Regions from US Census Bureau\"\n ) \n\ns_g1\n\n\n\n\n\n\n\n\n\nThe above graph shows diabetes prevalence trends by state, grouped into regions based on the US Census classification regions. While all regions of the United states show positive growth in diabetes prevalence, the south exhibits a slightly higher growth rate, as well as the highest prevalence.\n\n\nCode\ns_g2 <- state_map %>% \n st_drop_geometry() %>% \n filter(region == \"south\") %>% \n mutate_at(\"id\", ~snakecase::to_title_case(.)) %>% \n ggplot(aes(x = year, y = percentage)) +\n geom_line(aes(group = id ),na.rm = TRUE, color= \"#D95F02\") +\n gghighlight::gghighlight(id == \"North Carolina\", label_params = list(vjust = 3)) +\n scale_y_continuous(breaks = seq(5,13,2)) +\n scale_x_continuous(minor_breaks = seq(2000,2016,1)) +\n labs(\n title = \"Percentage of Diagnosed Diabetes in Adults (18+) \\nSouth Region\"\n ,x = NULL\n ,y = NULL\n ,caption = \"Regions from US Census Bureau\"\n ) +\n theme()\n\ns_g2\n\n\n\n\n\n\n\n\n\nWhen focusing on the south region, North Carolina falls close to the middle of diabetes prevalence.\n\n\nOverall - North Carolina\nWhen examining the trajectory for North Carolina, we can see that it has been consistently higher than national average . We see that in 2016 there was a large spike in diagnosed cases; unfortunately this is the last available year so it is unclear whether the upward trend continues. The graph below compares state-level average to the national average. Notice that the trend line is slightly higher than in the previous graphs: this is due to the age cut offs used for National and State level data vs County Level data. Previous data used 18 years of age as a cutoff for classifying adults, whereas the county level data uses 20. 
Due to removing 18- and 19-year-olds from the population, who typically have less diagnosed cases of diabetes than those of older ages, the computed prevalence increases slightly.\n\n\nCode\nd1 <- nc_diabetes_data %>% \n group_by(year) %>% \n summarise(\n pct = mean(percentage)\n ,us_pct = mean(us_pct)\n ) %>% \n pivot_longer(\n cols = c(\"pct\", \"us_pct\")\n ,names_to = \"metric\"\n ,values_to = \"values\"\n ) %>% \n mutate(\n metric = factor(metric\n ,levels = c(\"pct\",\"us_pct\")\n ,labels = c(\"NC\", \"National\"))\n )\n\nnc_g1 <- d1 %>% \n ggplot(aes(x = year, y = values, color = metric)) +\n geom_line() +\n geom_point(shape = 21, size = 3) +\n geom_vline(xintercept = 2011, linetype = \"dashed\", color = \"gray\") +\n scale_y_continuous(labels = function(x) paste0(x, \"%\")) +\n scale_color_brewer(palette = \"Dark2\") +\n labs(\n x = NULL\n ,y = NULL\n ,color = NULL\n ,title = \"Percent of Adults (20+) with Diagnosed Diabetes\"\n )\n\nnc_g1 \n\n\n\n\n\n\n\n\n\nWe see a spike in 2016, the last year for which the data are available. However, we should be careful with our interpretation of this pattern, because the examination of the county-level trajectories reveals an aberration in the trend that requires a more rigorous investigation.\n\n\nCode\nnc_g1a <- nc_diabetes_data %>% \n ggplot(aes(x = year, y = percentage)) +\n geom_line(aes(group = county),alpha = 0.4) +\n labs(\n x = NULL\n ,y = NULL\n ,color = NULL\n )\n\nnc_g1a\n\n\n\n\n\n\n\n\n\nWhile all of North Carolina has a higher prevalence than the national average, rural counties have systematically higher prevalence of diabetes than urban counties. Note that after 2011 both Urban and Rural counties break the upward trend exhibited in the previous 5 years. This could be explained by the addition of cell phones to the BRFS Survey as many rural areas are often lower income areas and may only rely on a cell phone for communication. As mentioned previously there is an odd spike in case in 2016 that can’t be explained by current documentation. 
For the purpose of this evaluation 2016 will be excluded from the county level data since the odd trend cannot be explained and no further data is available to determine if this is a real spike or could be attributed to methodology change or data quality.\n\n\nCode\nd2 <- nc_diabetes_data %>% \n select(-us_pct) %>% \n mutate(\n pct_rural = if_else(rural == TRUE, percentage, FALSE)\n ,pct_urban = if_else(rural == FALSE, percentage, FALSE)\n ) %>% \n select(-countyfips,-percentage) %>% \n group_by(year) %>% \n summarise(\n pct_rural = mean(pct_rural,na.rm = TRUE)\n ,pct_urban = mean(pct_urban,na.rm = TRUE)\n ) %>% left_join(us_diabetes_data) %>% \n pivot_longer(\n cols = c(\"us_pct\", \"pct_rural\",\"pct_urban\")\n ,names_to = \"metric\"\n ,values_to = \"value\"\n ,values_drop_na = TRUE\n ) %>% \n mutate(\n metric = factor(metric,\n levels = c(\"pct_rural\",\"pct_urban\",\"us_pct\")\n ,labels = c(\"Rural\",\"Urban\",\"US\")\n )\n )\n\nnc_g2 <- d2 %>% ggplot(aes(x = year, y = value, color = metric)) +\n geom_line() +\n geom_point(shape = 21, size = 3) +\n geom_vline(xintercept = 2011, linetype = \"dashed\", color = \"gray\") +\n scale_y_continuous(labels = function(x) paste0(x, \"%\")) +\n scale_color_brewer(palette = \"Dark2\") +\n labs(\n x = NULL\n ,y = NULL\n ,color = NULL\n ,title = \"Percent of Adults (20+) with Diagnosed Diabetes \\nDisplaying Rural vs Urban\"\n )\n\nnc_g2\n\n\n\n\n\n\n\n\n\n\n\nBy County - Geographical\nCounty level data first became available in 2004; three years of data are used to arrive at these estimates. For example, the 2006 estimates were computed using the data from 2005, 2006, and 2007 BRFS survey rounds. The county-level estimates were based on indirect model-dependent estimates using Bayesian multilevel modeling techniques (Barker et al., 2013; JNK, 2003). This model-dependent approach employs a statistical model that “borrows strength” in making an estimate for one county from BRFSS data collected in other counties and states. Multilevel Binomial regression models with random effects of demographic variables (age 20-44, 45-64, >=65; race/ethnicity; sex) at the county-level were developed. Estimates were adjusted for age to the 2000 US standard population using age groups of 20-44, 45-64, and 65 or older (Klein & Schoenborn, 2001).\n\n\nCode\ng50 <- nc_diabetes_data %>% \n filter(year < 2015) %>% \n mutate(\n rural = factor(rural\n ,levels = c(TRUE,FALSE)\n ,labels = c(\"Rural\", \"Urban\")\n )\n ) %>% \n ggplot(aes(x = year, y = percentage, color = rural)) +\n geom_line(aes(group = county),alpha = 0.3) +\n geom_smooth(aes(group = rural), method = \"loess\", se= FALSE, size = 1.1) +\n scale_color_brewer(palette = \"Dark2\") +\n labs(\n title = \"Percent of Adults (20+) with Diagnosed Diabetes \\nAll North Carolina Counties\"\n ,x = NULL\n ,y = NULL\n ,color = NULL\n )\n\ng50\n\n\n\n\n\n\n\n\n\nWhen viewing all county trend lines together, we see that the loess line for both urban and rural follows a similar trend for the time period.\nThe following graphs display the total estimated prevalence of Diabetes in each of the 100 North Carolina counties. To keep the scaling consistent between the graphs, we binned the estimates into 6 intervals of the same size. Rural counties are highlighted with a stronger border line as well as a letter “R” in respective geographic centers. 
These graphs allow us to view geographical clusters of diabetes prevalence.\n\n\nCode\nnc_counties_map_binned <- nc_counties_map %>% \n filter(year < 2015) %>% \n mutate(\n bin = dlookr::binning(.$percentage, nbins = 6 ,type = \"equal\")\n ,bin = forcats::fct_recode(bin\n ,\"6.5 - 7.9\" = \"[6.5,7.97]\"\n ,\"8.0 - 9.4\" = \"(7.97,9.43]\" \n ,\"9.5 - 10.9\" = \"(9.43,10.9]\" \n ,\"11.0 - 12.4\" = \"(10.9,12.4]\"\n ,\"12.5 - 13.8\" = \"(12.4,13.8]\" \n ,\"13.9 - 15.3\" = \"(13.8,15.3]\"\n )\n )\n\nc_g1 <- nc_counties_map_binned %>% \n filter(year %in% c(2006,2014)) %>% \n ggplot() +\n geom_sf() + #blank geom_sf keeps gridlines from overlapping map\n geom_sf(aes(fill = bin,color = rural)) +\n geom_sf(data = nc_cities) +\n ggrepel::geom_text_repel(data = nc_cities, \n aes(x = long, y = lat, label = city)\n ,nudge_y = c(-1,1,1,-1,1)\n ,nudge_x = c(0,0,0,-1,0)\n ) +\n geom_text(data = . %>% filter(rural == TRUE)\n ,aes(x = center_long, y = center_lat)\n ,label = \"R\"\n ,color = \"#696969\"\n ) +\n coord_sf(xlim = c(-84.5,-75.5), ylim = c(33.75,37)) +\n facet_wrap(~year) +\n scale_fill_viridis_d(alpha = 0.6, direction = -1) +\n scale_color_manual(\n values = c(\n \"FALSE\" = \"gray\"\n ,\"TRUE\" = \"black\"\n ),guide = 'none') +\n labs(\n title = \"Estimated Diabetes in Adults (20+) by County\"\n ,fill = \"Percentage\"\n ,y = NULL\n ,x = NULL\n ) +\n theme(\n panel.background = element_rect(fill = \"aliceblue\")\n ,panel.grid.major = element_line(color = \"#D4D4D4\", linetype = \"dashed\", \n size = 0.5)\n ,legend.position = \"bottom\"\n ,plot.title = element_text(hjust = 0.5)\n )\n\nc_g1\n\n\n\n\n\n\n\n\n\nThe following box plot displays the distribution of estimated cases by county from 2006 to 2014. For all years of current data the mean of rural counties is higher than that of their Urban counterparts.\n\n\nCode\nc_g1c <- nc_counties_map %>% \n mutate(\n rural = factor(rural\n ,levels = c(TRUE,FALSE)\n ,labels = c(\"Rural\", \"Urban\")\n )) %>% \n filter(year < 2015) %>%\n ggplot(aes(x = year, y = percentage, group = interaction(year,rural), fill = rural)) +\n geom_boxplot(alpha = 0.5) +\n scale_fill_brewer(palette = \"Dark2\") +\n scale_x_continuous(breaks = seq(2004,2014,2)) +\n labs(\n x = NULL\n ,y = NULL\n ,fill = NULL\n ,title = \"Distribution of Estimated Cases by County 2006 - 2014\"\n )\n\nc_g1c\n\n\n\n\n\n\n\n\n\n\n\nBy County - Percent Change\nThe following graphs display the overall change in estimated prevalence between 2006 and 2014.\n\n\nCode\nd3 <- nc_counties_map %>% \n st_drop_geometry() %>% \n filter(year %in% c(2006,2014)) %>% \n select(-countyfips,-us_pct) %>% \n pivot_wider(names_from = \"year\"\n ,values_from = \"percentage\") %>% \n mutate(\n pct_p = `2014` - `2006`\n ,pct_c = ((`2014` - `2006`)/`2006`) * 100\n ) %>% \n left_join(nc_counties_map_raw) %>% \n st_as_sf()\n\n\nc_g4 <- d3 %>% \n ggplot() +\n geom_sf() + #blank geom_sf keeps gridlines from overlapping map\n geom_sf(aes(fill = pct_c ,color = rural)) +\n geom_sf(data = nc_cities) +\n ggrepel::geom_text_repel(data = nc_cities, \n aes(x = long, y = lat, label = city)\n ,nudge_y = c(-1,1,1,-1,1)\n ,nudge_x = c(0,0,0,-1,0)\n ) +\n geom_text(data = . %>% filter(rural == TRUE)\n ,aes(x = center_long, y = center_lat)\n ,label = \"R\"\n ,color = \"#696969\"\n ) +\n # scale_fill_viridis_c(alpha = 0.6, direction = -1) +\n scale_fill_gradient2(\n low = \"#d01c8b\"\n ,mid = \"#f7f7f7\"\n ,high = \"#4dac26\"\n ,midpoint = 0\n ) +\n scale_color_manual(\n values = c(\n \"FALSE\" = \"gray\"\n ,\"TRUE\" = \"black\"\n ),guide = 'none') +\n labs(\n title = \"Percentage Change of Diagnosed Diabetes 2006-2014\"\n ,fill = \"Percentage\"\n ,y = NULL\n ,x = NULL\n ) +\n theme(\n panel.background = element_rect(fill = \"aliceblue\")\n ,panel.grid.major = element_line(color = \"#D4D4D4\", linetype = \"dashed\", \n size = 0.5)\n )\n\nc_g4\n\n\n\n\n\n\n\n\n\nThe following chart displays the density curve of the percentage change for both rural and urban counties. It is notable that the mean of change for Urban counties is actually higher than the mean for rural counties. However, we also see that most change for both regions is positive growth. In fact only 16 rural and 10 urban counties experienced negative change in the given time frame, while 35 rural and 34 urban counties experienced growth in the same period.\n\n\nCode\nd4 <- d3 %>% \n st_drop_geometry() %>% \n mutate(\n rural = factor(rural\n ,levels = c(TRUE,FALSE)\n ,labels = c(\"Rural\", \"Urban\")\n )\n )\n\n\nmean_d4 <- d4 %>% \n group_by(rural) %>% \n summarise(.groups = \"keep\"\n ,pct_c = mean(pct_c)\n )\n\ng51 <- d4 %>% \n ggplot(aes(x = pct_c, fill = rural, y = ..density.., color = rural)) +\n geom_histogram(binwidth = 5, position = \"identity\", alpha = 0.3) +\n geom_density(alpha = 0.5) +\n facet_wrap(~rural, ncol = 1) +\n geom_vline(aes(xintercept = pct_c), data = mean_d4) +\n geom_text(aes(x = pct_c, y = 0.038, label = round(pct_c, 2))\n ,data = mean_d4\n ,hjust = -0.15\n ,size = 5\n ,color = \"#000000\") +\n geom_vline(xintercept = 0, linetype = \"dashed\", color = \"#696969\") +\n scale_color_brewer(palette = \"Dark2\", guide = NULL) +\n scale_fill_brewer(palette = \"Dark2\", guide = NULL) +\n labs(\n x = \"Percentage Change\"\n ,y = \"Density\"\n ,fill = NULL\n )\ng51\n\n\n\n\n\n\n\n\n\n\n\nConclusion and Next Steps\nThe original hypothesis of this report was that rural counties were growing at a higher rate than their urban counterparts. Throughout this post it has been shown that this hypothesis is incorrect: just being a rural county does not indicate diabetes growth; in fact the growth rate throughout North Carolina has been consistent. Further posts will explore other reasons for these trends, as the current post merely explores the trends and differences using data visualizations; a more rigorous and formal evaluation of these comparisons is in order.\n\n\nSession information\n===========================================================================\nFor the sake of documentation and reproducibility, the current report was rendered in the following environment. 
Click the line below to expand.\n\n\nEnvironment \n\n\n\nCode\nif( requireNamespace(\"devtools\", quietly = TRUE) ) {\n devtools::session_info()\n} else {\n sessionInfo()\n} \n\n\n─ Session info ───────────────────────────────────────────────────────────────\n setting value\n version R version 4.4.1 (2024-06-14 ucrt)\n os Windows 11 x64 (build 22621)\n system x86_64, mingw32\n ui RTerm\n language (EN)\n collate English_United States.utf8\n ctype English_United States.utf8\n tz America/New_York\n date 2024-08-27\n pandoc 3.2 @ C:/Users/belangew/AppData/Local/Pandoc/ (via rmarkdown)\n\n─ Packages ───────────────────────────────────────────────────────────────────\n package * version date (UTC) lib source\n bit 4.0.5 2022-11-15 [1] CRAN (R 4.4.1)\n bit64 4.0.5 2020-08-30 [1] CRAN (R 4.4.1)\n cachem 1.1.0 2024-05-16 [1] CRAN (R 4.4.1)\n class 7.3-22 2023-05-03 [1] CRAN (R 4.4.1)\n classInt 0.4-10 2023-09-05 [1] CRAN (R 4.4.1)\n cli 3.6.3 2024-06-21 [1] CRAN (R 4.4.1)\n colorspace 2.1-1 2024-07-26 [1] CRAN (R 4.4.1)\n confintr 1.0.2 2023-06-04 [1] CRAN (R 4.4.1)\n crayon 1.5.3 2024-06-20 [1] CRAN (R 4.4.1)\n crul 1.5.0 2024-07-19 [1] CRAN (R 4.4.1)\n curl 5.2.1 2024-03-01 [1] CRAN (R 4.4.1)\n DBI 1.2.3 2024-06-02 [1] CRAN (R 4.4.1)\n devtools 2.4.5 2022-10-11 [1] CRAN (R 4.4.1)\n digest 0.6.36 2024-06-23 [1] CRAN (R 4.4.1)\n dlookr 0.6.3 2024-02-07 [1] CRAN (R 4.4.1)\n dplyr * 1.1.4 2023-11-17 [1] CRAN (R 4.4.1)\n e1071 1.7-14 2023-12-06 [1] CRAN (R 4.4.1)\n ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.4.1)\n evaluate 0.24.0 2024-06-10 [1] CRAN (R 4.4.1)\n extrafont 0.19 2023-01-18 [1] CRAN (R 4.4.0)\n extrafontdb 1.0 2012-06-11 [1] CRAN (R 4.4.0)\n fansi 1.0.6 2023-12-08 [1] CRAN (R 4.4.1)\n farver 2.1.2 2024-05-13 [1] CRAN (R 4.4.1)\n fastmap 1.2.0 2024-05-15 [1] CRAN (R 4.4.1)\n fontBitstreamVera 0.1.1 2017-02-01 [1] CRAN (R 4.4.0)\n fontLiberation 0.1.0 2016-10-15 [1] CRAN (R 4.4.0)\n fontquiver 0.2.1 2017-02-01 [1] CRAN (R 4.4.1)\n forcats 1.0.0 2023-01-29 [1] CRAN (R 4.4.1)\n fs 1.6.4 2024-04-25 [1] CRAN (R 4.4.1)\n gdtools 0.3.7 2024-03-05 [1] CRAN (R 4.4.1)\n generics 0.1.3 2022-07-05 [1] CRAN (R 4.4.1)\n gfonts 0.2.0 2023-01-08 [1] CRAN (R 4.4.1)\n gghighlight 0.4.1 2023-12-16 [1] CRAN (R 4.4.1)\n ggplot2 * 3.5.1 2024-04-23 [1] CRAN (R 4.4.1)\n ggpmisc 0.6.0 2024-06-28 [1] CRAN (R 4.4.1)\n ggpp 0.5.8-1 2024-07-01 [1] CRAN (R 4.4.1)\n ggrepel 0.9.5 2024-01-10 [1] CRAN (R 4.4.1)\n glue 1.7.0 2024-01-09 [1] CRAN (R 4.4.1)\n gridExtra 2.3 2017-09-09 [1] CRAN (R 4.4.1)\n gtable 0.3.5 2024-04-22 [1] CRAN (R 4.4.1)\n hms 1.1.3 2023-03-21 [1] CRAN (R 4.4.1)\n hrbrthemes 0.8.7 2024-03-04 [1] CRAN (R 4.4.1)\n htmltools 0.5.8.1 2024-04-04 [1] CRAN (R 4.4.1)\n htmlwidgets 1.6.4 2023-12-06 [1] CRAN (R 4.4.1)\n httpcode 0.3.0 2020-04-10 [1] CRAN (R 4.4.1)\n httpuv 1.6.15 2024-03-26 [1] CRAN (R 4.4.1)\n jsonlite 1.8.8 2023-12-04 [1] CRAN (R 4.4.1)\n kableExtra 1.4.0 2024-01-24 [1] CRAN (R 4.4.1)\n KernSmooth 2.23-24 2024-05-17 [1] CRAN (R 4.4.1)\n knitr 1.48 2024-07-07 [1] CRAN (R 4.4.1)\n labeling 0.4.3 2023-08-29 [1] CRAN (R 4.4.0)\n later 1.3.2 2023-12-06 [1] CRAN (R 4.4.1)\n lattice 0.22-6 2024-03-20 [1] CRAN (R 4.4.1)\n lifecycle 1.0.4 2023-11-07 [1] CRAN (R 4.4.1)\n magrittr * 2.0.3 2022-03-30 [1] CRAN (R 4.4.1)\n mapdata * 2.3.1 2022-11-01 [1] CRAN (R 4.4.1)\n maps * 3.4.2 2023-12-15 [1] CRAN (R 4.4.1)\n MASS 7.3-60.2 2024-04-26 [1] CRAN (R 4.4.1)\n Matrix 1.7-0 2024-04-26 [1] CRAN (R 4.4.1)\n MatrixModels 0.5-3 2023-11-06 [1] CRAN (R 4.4.1)\n memoise 2.0.1 2021-11-26 [1] CRAN (R 4.4.1)\n mgcv 
1.9-1 2023-12-21 [1] CRAN (R 4.4.1)\n mime 0.12 2021-09-28 [1] CRAN (R 4.4.0)\n miniUI 0.1.1.1 2018-05-18 [1] CRAN (R 4.4.1)\n munsell 0.5.1 2024-04-01 [1] CRAN (R 4.4.1)\n nlme 3.1-164 2023-11-27 [1] CRAN (R 4.4.1)\n pagedown 0.20 2022-12-13 [1] CRAN (R 4.4.1)\n pillar 1.9.0 2023-03-22 [1] CRAN (R 4.4.1)\n pkgbuild 1.4.4 2024-03-17 [1] CRAN (R 4.4.1)\n pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.4.1)\n pkgload 1.4.0 2024-06-28 [1] CRAN (R 4.4.1)\n polynom 1.4-1 2022-04-11 [1] CRAN (R 4.4.1)\n profvis 0.3.8 2023-05-02 [1] CRAN (R 4.4.1)\n promises 1.3.0 2024-04-05 [1] CRAN (R 4.4.1)\n proxy 0.4-27 2022-06-09 [1] CRAN (R 4.4.1)\n purrr 1.0.2 2023-08-10 [1] CRAN (R 4.4.1)\n quantreg 5.98 2024-05-26 [1] CRAN (R 4.4.1)\n R6 2.5.1 2021-08-19 [1] CRAN (R 4.4.1)\n RColorBrewer 1.1-3 2022-04-03 [1] CRAN (R 4.4.0)\n Rcpp 1.0.13 2024-07-17 [1] CRAN (R 4.4.1)\n reactable 0.4.4 2023-03-12 [1] CRAN (R 4.4.1)\n readr * 2.1.5 2024-01-10 [1] CRAN (R 4.4.1)\n remotes 2.5.0 2024-03-17 [1] CRAN (R 4.4.1)\n rlang 1.1.4 2024-06-04 [1] CRAN (R 4.4.1)\n rmarkdown 2.28 2024-08-17 [1] CRAN (R 4.4.1)\n rstudioapi 0.16.0 2024-03-24 [1] CRAN (R 4.4.1)\n Rttf2pt1 1.3.12 2023-01-22 [1] CRAN (R 4.4.0)\n scales 1.3.0 2023-11-28 [1] CRAN (R 4.4.1)\n sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.4.1)\n sf * 1.0-16 2024-03-24 [1] CRAN (R 4.4.1)\n shiny 1.9.1 2024-08-01 [1] CRAN (R 4.4.1)\n showtext 0.9-7 2024-03-02 [1] CRAN (R 4.4.1)\n showtextdb 3.0 2020-06-04 [1] CRAN (R 4.4.1)\n snakecase 0.11.1 2023-08-27 [1] CRAN (R 4.4.1)\n SparseM 1.84-2 2024-07-17 [1] CRAN (R 4.4.1)\n stringi 1.8.4 2024-05-06 [1] CRAN (R 4.4.0)\n stringr 1.5.1 2023-11-14 [1] CRAN (R 4.4.1)\n survival 3.6-4 2024-04-24 [1] CRAN (R 4.4.1)\n svglite 2.1.3 2023-12-08 [1] CRAN (R 4.4.1)\n sysfonts 0.8.9 2024-03-02 [1] CRAN (R 4.4.1)\n systemfonts 1.1.0 2024-05-15 [1] CRAN (R 4.4.1)\n tibble 3.2.1 2023-03-20 [1] CRAN (R 4.4.1)\n tidyr * 1.3.1 2024-01-24 [1] CRAN (R 4.4.1)\n tidyselect 1.2.1 2024-03-11 [1] CRAN (R 4.4.1)\n tzdb 0.4.0 2023-05-12 [1] CRAN (R 4.4.1)\n units 0.8-5 2023-11-28 [1] CRAN (R 4.4.1)\n urlchecker 1.0.1 2021-11-30 [1] CRAN (R 4.4.1)\n usethis 3.0.0 2024-07-29 [1] CRAN (R 4.4.1)\n utf8 1.2.4 2023-10-22 [1] CRAN (R 4.4.1)\n vctrs 0.6.5 2023-12-01 [1] CRAN (R 4.4.1)\n viridisLite 0.4.2 2023-05-02 [1] CRAN (R 4.4.1)\n vroom 1.6.5 2023-12-05 [1] CRAN (R 4.4.1)\n withr 3.0.0 2024-01-16 [1] CRAN (R 4.4.1)\n xfun 0.46 2024-07-18 [1] CRAN (R 4.4.1)\n xml2 1.3.6 2023-12-04 [1] CRAN (R 4.4.1)\n xtable 1.8-4 2019-04-21 [1] CRAN (R 4.4.1)\n yaml 2.3.10 2024-07-26 [1] CRAN (R 4.4.1)\n\n [1] C:/Program Files/R/R-4.4.1/library\n\n──────────────────────────────────────────────────────────────────────────────\n\n\n\n\n\n\n\n\n\n\n\nReferences\n\nAmerican Diabetes Asssociation. (2015). The burden of diabetes in north carolina. http://main.diabetes.org/dorg/PDFs/Advocacy/burden-of-diabetes/north-carolina.pdf\n\n\nBarker, L. E., Thompson, T. J., Kirtland, K. A., Boyle, J. P., Geiss, L. S., McCauley, M. M., & Albright, A. L. (2013). Bayesian small area estimates of diabetes incidence by united states county, 2009. Journal of Data Science, 11(1), 269–280. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4537395/\n\n\nCenters for Disease Control and Prevention. (2020). National diabetes statistics report. US Department of Health and Human Services. https://www.cdc.gov/diabetes/pdfs/data/statistics/national-diabetes-statistics-report.pdf\n\n\nJNK, R. (2003). Small area estimation. https://onlinelibrary.wiley.com/doi/pdf/10.1002/0471722189.fmatter\n\n\nKlein, R. 
J., & Schoenborn, C. A. (2001). Age adjustment using the 2000 projected u.s. population. Healthy People 2000 Stat Notes, 20, 1–9.\n\n\nPierannunzi, C., Town, M., Garvin, W., Shaw, F. E., & Balluz, L. (2012). Methodologic changes in the behavioral risk factor surveillance system in 2011 and potential effects on prevalence estimates. Morbidity and Mortality Weekly Report, 61(22), 410–413. https://www.cdc.gov/mmwr/pdf/wk/mm6122.pdf\n\nReuseCC BY 4.0CitationBibTeX citation:@online{belanger2020,\n author = {Belanger, Kyle},\n title = {Diabetes in {Rural} {North} {Carolina} : {Exploring}\n {Prevalence} {Trends}},\n date = {2020-06-25},\n langid = {en}\n}\nFor attribution, please cite this work as:\nBelanger, K. (2020, June 25). Diabetes in Rural North Carolina :\nExploring Prevalence Trends."
},
{
"objectID": "posts/2021-01-12_blogdown-to-distill/creating-a-distill-blog.html",
"href": "posts/2021-01-12_blogdown-to-distill/creating-a-distill-blog.html",
"title": "Converting From Blogdown to Distill",
"section": "",
"text": "I have since converted this blog to a quarto blog, but am leaving this post up in case anyone finds it useful"
},
{
"objectID": "posts/2021-01-12_blogdown-to-distill/creating-a-distill-blog.html#code-folding",
"href": "posts/2021-01-12_blogdown-to-distill/creating-a-distill-blog.html#code-folding",
"title": "Converting From Blogdown to Distill",
"section": "Code Folding",
"text": "Code Folding\nWhen I converted my blog on 12/30/2020, code folding was not included as an option by default in distill. At that time, an excellent package called Codefolder added the functionality. Since going live with the blog, code folding has been added to distill.1 Code folding is available for either the whole document or individual code sections. The default caption is “Show Code”, but instead of typing code_folding=TRUE, you can provide a string to change the caption.\n\n# Some awesome code \n# That does awesome things"
},
{
"objectID": "posts/2021-01-12_blogdown-to-distill/creating-a-distill-blog.html#customizing-the-home-page",
"href": "posts/2021-01-12_blogdown-to-distill/creating-a-distill-blog.html#customizing-the-home-page",
"title": "Converting From Blogdown to Distill",
"section": "Customizing the Home Page",
"text": "Customizing the Home Page\nBy default, a distill blog’s home page will be the blog index page. I chose to edit my home page to be a landing page for myself and then have the blog index as a separate page. When creating a new blog, this is the default YAML header for your index page.\n---\ntitle: \"New Site\"\nsite: distill::distill_website\nlisting: posts\n---\nThe critical piece here is the line site: distill::distill_website. This line is what is needed to render the website. For my home page, I decided to use the package Postcard, which is used to generate simple landing pages. I won’t go into every step as there is already a great post by Alison Hill on how to do that. However, I will point out the most crucial part of the new index page the YAML header needs to contain these two lines.\noutput:\n postcards::trestles\nsite: distill::distill_website"
},
{
"objectID": "posts/2021-01-12_blogdown-to-distill/creating-a-distill-blog.html#footnotes",
"href": "posts/2021-01-12_blogdown-to-distill/creating-a-distill-blog.html#footnotes",
"title": "Converting From Blogdown to Distill",
"section": "Footnotes",
"text": "Footnotes\n\n\nNote that as of publishing, code folding is only available in the development version of distill↩︎"
},
{
"objectID": "posts/2024-05-15-US-NHL-Birthrate/index.html",
"href": "posts/2024-05-15-US-NHL-Birthrate/index.html",
"title": "Does a US Born Players Birthdate affect their shot at the NHL",
"section": "",
"text": "This post is inspired by this fantastic blog post on Jlaws Blog. In it they explore how in the first chapter Malcolm Gladwell’s Outliers he discusses how in Canadian Junior Hockey there is a higher likelihood for players to be born in the first quarter of the year. As it appears cutoff dates for USA hockey are different and they are currently using June 1st (if my internet searches are to be believed), I wondered if the same analysis would hold true for American Born Players."
},
{
"objectID": "posts/2024-05-15-US-NHL-Birthrate/index.html#distribution-of-births-by-month-in-the-united-states",
"href": "posts/2024-05-15-US-NHL-Birthrate/index.html#distribution-of-births-by-month-in-the-united-states",
"title": "Does a US Born Players Birthdate affect their shot at the NHL",
"section": "Distribution of Births by Month in the United States",
"text": "Distribution of Births by Month in the United States\nThe data for US Birth Rates can be pulled from CDC Wonder. The particular table of interest is the Natality, 2007 - 2022. CDC Wonder has a quite interesting API that requires a request with quite a few XML parameters. Thankfully you can build the request on the website and a nice package already exists to send the query. Check out the Wonderapi Page for more info.\n\nusa_raw <- wonderapi::send_query(\"D66\", here::here(\"posts\", \"2024-05-15-US-NHL-Birthrate\", \"cdc_wonder_request.xml\"))\n\nusa_births <- usa_raw %>%\n dplyr::group_by(Month) %>%\n dplyr::summarise(country_births = sum(Births), .groups = \"drop\") %>%\n dplyr::mutate(country_pct = country_births / sum(country_births))\n\n\nDistribution of Births Compared to Expected\nThe data from CDC Wonder pulls in quite nice, the only addition is adding a column for expected Births. This column gives each day of each month an equal chance for a person being born. Based on the data the summer months (June through August), and September have a slightly higher actual birth vs expected. Based on cut off Dates many of these kids would be the oldest in their age groups.\n\nusa_births %>%\n dplyr::mutate(expected_births = dplyr::case_when(\n Month %in% c(\"April\", \"June\", \"September\", \"November\") ~ 30 / 365\n , Month == \"February\" ~ 28 / 365\n , .default = 31 / 365\n )\n , difference = country_pct - expected_births\n , dplyr::across(Month, ~factor(., levels = month.name))\n , dplyr::across(c(country_pct, expected_births, difference), ~scales::percent(., accuracy = .1))\n ) %>%\n dplyr::arrange(Month) %>%\n dplyr::rename_with(~stringr::str_replace_all(., \"_\", \" \")) %>%\n dplyr::rename_with(stringr::str_to_title) %>%\n kableExtra::kbl() %>%\n kableExtra::kable_styling()\n\n\n\n\nMonth\nCountry Births\nCountry Pct\nExpected Births\nDifference\n\n\n\n\nJanuary\n5118343\n8.2%\n8.5%\n-0.3%\n\n\nFebruary\n4758741\n7.6%\n7.7%\n-0.1%\n\n\nMarch\n5205579\n8.3%\n8.5%\n-0.2%\n\n\nApril\n5001651\n8.0%\n8.2%\n-0.3%\n\n\nMay\n5226642\n8.3%\n8.5%\n-0.2%\n\n\nJune\n5226141\n8.3%\n8.2%\n0.1%\n\n\nJuly\n5528731\n8.8%\n8.5%\n0.3%\n\n\nAugust\n5635283\n9.0%\n8.5%\n0.5%\n\n\nSeptember\n5448101\n8.7%\n8.2%\n0.5%\n\n\nOctober\n5348495\n8.5%\n8.5%\n0.0%\n\n\nNovember\n5059952\n8.1%\n8.2%\n-0.2%\n\n\nDecember\n5227828\n8.3%\n8.5%\n-0.2%"
},
{
"objectID": "posts/2024-05-15-US-NHL-Birthrate/index.html#hockey-data",
"href": "posts/2024-05-15-US-NHL-Birthrate/index.html#hockey-data",
"title": "Does a US Born Players Birthdate affect their shot at the NHL",
"section": "Hockey Data",
"text": "Hockey Data\nWhile I wish I could sit and type out how I sat and figured out the complexity of the NHL Stats API and how to bring it into R. In reality I took a great guide, that being Jlaws post, and tweaked what I needed. Instead of Canadian players, I pulled out just the US Born players and their birth dates. I did also pull out positions to see if that will make any sort of difference. What pulls out of the NHL API has a ton of great details and I look forward to diving into what is available to see what kind of graphics can be built.\n08/27/2024 Update Due to the the Coyotes moving to Utah, I had to edit the code slightly to adjust for this. When gathering the active roster data the API was returning a blank response. This was causing Tidyr Hoist to fail because it could not pull the columns from the nested data frame. I added a check to see if the data frame is empty and if it is, then I return an empty data frame and skip this step.\n\nteams <- httr::GET(\"https://api.nhle.com/stats/rest/en/team\") %>%\n httr::content() %>%\n .[[\"data\"]] %>%\n tibble::tibble(data = .) %>%\n tidyr::unnest_wider(data)\n\nget_roster <- function(team){\n df <- httr::GET(glue::glue(\"https://api-web.nhle.com/v1/roster/{team}/20232024\")) %>%\n httr::content() %>%\n purrr::flatten() %>%\n tibble::tibble(data = .)\n\n if (!nrow(df) == 0) {\n df <- df |>\n tidyr::hoist(\n .col = \"data\"\n , \"firstName\" = list(\"firstName\", 1L)\n , \"lastName\" = list(\"lastName\", 1L)\n , \"positionCode\"\n , \"birthDate\"\n , \"birthCountry\"\n )\n }\n return(df)\n}\n\nusa_roster <- purrr::map(teams$triCode, get_roster) %>%\n purrr::list_rbind() %>%\n dplyr::filter(!is.na(firstName)) %>%\n dplyr::filter(birthCountry == \"USA\") %>%\n dplyr::mutate(\n mob = lubridate::month(lubridate::ymd(birthDate), label = TRUE, abbr = FALSE)\n , mob_id = lubridate::month(lubridate::ymd(birthDate))\n ) %>%\n dplyr::count(mob_id, mob, name = \"players\") %>%\n dplyr::mutate(player_pct = players / sum(players))"
},
{
"objectID": "posts/2024-05-15-US-NHL-Birthrate/index.html#graph-it",
"href": "posts/2024-05-15-US-NHL-Birthrate/index.html#graph-it",
"title": "Does a US Born Players Birthdate affect their shot at the NHL",
"section": "Graph It",
"text": "Graph It\nLets now take a look at the graph. Using the ggimage package we can place nice logos for both the United States and NHL on the graph. This stands out quite nicely versus just using a colored point. Interesting enough the graph seems to show being born early on in the year may mean making the NHL is more likely.\n\nnhl_icon <- \"https://pbs.twimg.com/media/F9sTTAYakAAkRv6.png\"\nusa_icon <- \"https://cdn-icons-png.flaticon.com/512/197/197484.png\"\n\ncombined <- usa_roster %>%\n dplyr::left_join(usa_births, by = c(\"mob\" = \"Month\")) %>%\n dplyr::mutate(\n random = dplyr::case_when(\n mob_id %in% c(4, 6, 9, 11) ~ 30 / 365,\n mob_id %in% c(1, 3, 5, 7, 8, 10, 12) ~ 31 / 365,\n mob_id == 2 ~ 28 / 365\n )\n )\n\n# labels <- combined %>% glue::glue_data(\"{mob} <br> n = {players}\")\n\ng1 <- combined %>%\n ggplot(aes(x = forcats::fct_reorder(mob, -mob_id))) +\n geom_line(aes(y = random, group = 1), linetype = 2, color = \"grey60\") +\n geom_linerange(aes(ymin = country_pct, ymax = player_pct)) +\n geom_image(aes(image = nhl_icon, y = player_pct), size = 0.1) +\n geom_image(aes(image = usa_icon, y = country_pct), size = 0.08) +\n geom_text(aes(label = scales::percent(player_pct, accuracy = .1),\n y = dplyr::if_else(player_pct > country_pct, player_pct + .006, player_pct - .006)), size = 5) +\n geom_text(aes(label = scales::percent(country_pct, accuracy = .1),\n y = dplyr::if_else(country_pct > player_pct, country_pct + .006, country_pct - .006)), size = 5) +\n scale_y_continuous(labels = scales::percent) +\n # scale_x_discrete(labels = labels) +\n coord_flip() +\n labs(\n x = \"Month of Birth\"\n , y = \"Percentage of Births\"\n , title = \"Are United States Born NHL Players More Likely to be Born Early in the Year?\"\n , subtitle = \"Comparing the distribution of birth months between US NHL players and US in general\"\n , caption = glue::glue(\n \"<img src = {nhl_icon} width = '15' height=' 15' /> - US NHL Players Birth Month Distribution <br />\n <img src = {usa_icon} width = '15' height=' 15' /> - US Birth Month (2007-2022) Distribution\"\n )\n ) +\n theme_minimal() +\n theme(\n plot.caption = element_markdown()\n , plot.title.position = \"plot\"\n , text = element_text(size = 16)\n , axis.text = element_markdown()\n )\n\n\ng1\n\n\n\n\n\n\n\n# Stats ----\n\nbroom::tidy(chisq.test(x = combined$players, p = combined$country_pct))\n\n# A tibble: 1 × 4\n statistic p.value parameter method \n <dbl> <dbl> <dbl> <chr> \n1 7.34 0.771 11 Chi-squared test for given probabilities\n\n\nIf we look at this from a more stats based perspective, running a chi square test on the amount of players in the NHL per month, based on the US expected birth rate, we do see however there is quite a high p value. This is lets us know we can not reject the Null hypothesis that these are the same thing."
}
]