This vignette aims to highlight the parallel processing capabilities within eiCompare. Functions that include this option are:
ei_iter()
ei_rxc() (only for diagnostics)
run_geocoder()
Prior to attempting to run these functions in parallel, it is advised that you check your computer or server for the following properties:
You should have more than 4 physical cores
You should have at least 16 GB of RAM
Building off of existing parallel processing packages such as doParallel, this package includes parallel processing capabilities to speed up ecological inference analyses.
Parallel processing decreases the time needed for your processes by splitting the job among your computer's CPU cores. We recommend 16 GB of RAM so that R can store the data you are currently working on. Furthermore, if you are using multiple cores, the minimum RAM needed is the product of the number of cores you're using and the size of your data. So if you are working on 4 cores and your dataset is 1 GB, you'll be using at least 4 GB of RAM.
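As a quick sanity check of that rule of thumb, using the numbers from the example above:

# Lower bound on RAM when parallelizing: cores in use times data size
n_cores_used <- 4   # cores devoted to the job
data_size_gb <- 1   # in-memory size of the dataset
n_cores_used * data_size_gb  # at least 4 GB of RAM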
In order to make this functionality more accessible to users, eiCompare's functions that include parallel processing include a check for the number of cores you have available. If you have fewer than 4 cores, our functions will not let you proceed with parallelization. Even with exactly 4 cores, the functions will return a warning that parallelization is not recommended.
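If you want to run a similar check yourself before calling anything, here is a minimal sketch using base R's parallel package. It mirrors the behavior described above but is our assumption, not eiCompare's actual internal check:

library(parallel)

# Count physical cores (logical = FALSE excludes hyperthreaded logical cores)
n_cores <- detectCores(logical = FALSE)
if (n_cores < 4) {
  stop("Fewer than 4 physical cores: parallelization is not supported.")
} else if (n_cores == 4) {
  warning("Exactly 4 physical cores: parallelization is not recommended.")
}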
There are many resources online if you’d like to learn more about parallelization in general.
In this vignette, we'll be focusing on ei_iter() (its functionality and workflow are described in detail in the Ecological Inference tutorial). We recommend that you review that vignette prior to attempting parallelization for this function.
The data we'll be using for this example is from the 2014 elections in California, specifically looking at voting results and racial demographics for Corona by precinct.
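The code below loads the data and prints the first few rows; it assumes the corona dataset ships with eiCompare, which the analysis code later in this vignette also relies on:

library(eiCompare)

# 2014 Corona, CA voting results and racial demographics by precinct
data("corona")
head(corona)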
#>   precinct totvote pct_husted pct_spiegel pct_ruth pct_button pct_montanez
#> 1    24000    1626    0.11070      0.2091   0.1796     0.1538       0.1599
#> 2    24003    1214    0.10791      0.2257   0.1746     0.1549       0.1746
#> 3    24005     732    0.11475      0.2281   0.1653     0.1352       0.1708
#> 4    24013    1057    0.08988      0.2346   0.1703     0.1183       0.2044
#> 5    24014    1270    0.13150      0.2299   0.1835     0.1260       0.1630
#> 6    24015     595    0.09412      0.2622   0.1580     0.1479       0.1664
#>   pct_fox pct_hisp pct_asian pct_white pct_non_lat
#> 1  0.1870   0.2483   0.03730    0.7144      0.7517
#> 2  0.1623   0.3296   0.02360    0.6468      0.6704
#> 3  0.1858   0.3604   0.05944    0.5801      0.6396
#> 4  0.1826   0.2364   0.07377    0.6898      0.7636
#> 5  0.1661   0.2752   0.05516    0.6697      0.7248
#> 6  0.1714   0.2959   0.14166    0.5624      0.7041
We have a row for every precinct; checking the dimensions of our dataset shows 46 rows. We also have 12 variables included in this dataset.
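Checking the dimensions:

dim(corona)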
#>  46 12
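And the variable names:

names(corona)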
#>  "precinct" "totvote" "pct_husted" "pct_spiegel" "pct_ruth" #>  "pct_button" "pct_montanez" "pct_fox" "pct_hisp" "pct_asian" #>  "pct_white" "pct_non_lat"
The variables are as follows:
precinct: Precinct ID number
totvote: Total number of votes cast
pct_husted: Percent of voting precinct population who voted for Husted
pct_spiegel: Percent of voting precinct population who voted for Spiegel
pct_ruth: Percent of voting precinct population who voted for Ruth
pct_button: Percent of voting precinct population who voted for Button
pct_montanez: Percent of voting precinct population who voted for Montanez
pct_fox: Percent of voting precinct population who voted for Fox
pct_hisp: Percent of voting precinct population who identify as Hispanic
pct_asian: Percent of voting precinct population who identify as Asian
pct_white: Percent of voting precinct population who identify as White
pct_non_lat: Percent of voting precinct population who identify as Non-Latino
Non-Latino encompasses the Asian and White voting population.
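Before running EI, it's also worth confirming that no precinct has missing data. The output below is consistent with a row-wise completeness check; the exact call shown here is our assumption:

# TRUE for every precinct row means no missing values anywhere
complete.cases(corona)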
#>  TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#>  TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#>  TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#>  TRUE
So for this analysis there are 6 candidates (Husted, Spiegel, Ruth, Button, Montanez, and Fox) and 3 racial groups (Hispanic/Latino, Asian, and White). With that, let's set up the inputs we need for the function and time it to see how long it takes to complete the iterative ei analysis without parallelization.
cand_cols <- c("pct_husted", "pct_spiegel", "pct_ruth", "pct_button",
               "pct_montanez", "pct_fox")
race_cols <- c("pct_hisp", "pct_asian", "pct_white")
totals_col <- "totvote"

# Run without parallelization
start_time <- Sys.time()
results_test <- ei_iter(corona, cand_cols, race_cols, totals_col)
(end_time <- Sys.time() - start_time)
To run this in parallel, all you need to do is set par_compute to TRUE.
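The timed parallel run then mirrors the pattern above (par_compute is the toggle named in this vignette):

# Run with parallelization
start_time <- Sys.time()
results_par <- ei_iter(corona, cand_cols, race_cols, totals_col,
                       par_compute = TRUE)
(end_time_par <- Sys.time() - start_time)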
This saves us about a minute for this specific dataset (on the computer we ran this on; times will vary). With larger datasets and more candidate and racial demographic comparisons, the process will take longer and parallelization will become more beneficial.
Depending on your dataset, the number of races and candidates you're analyzing, the amount of RAM you have, and the number of physical and logical cores you have, the amount of time it takes to run eiCompare functions will differ. Furthermore, it's important to understand that parallelizing requires overhead time, and the relationship between sample size and run time is not necessarily linear. In the plot below, you'll be able to see the average run time of a dataset for samples of 100, 200, and 300 precincts. It is apparent there that fewer samples do not necessarily equate to a shorter run time. Nonetheless, parallelization can save multiples of the ~4 minutes saved here, especially if you repeat function calls for analyses such as a bootstrap.
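If you'd like to produce a comparison like that on your own machine, here is a hedged sketch. time_ei is a hypothetical helper, not part of eiCompare, and we resample precincts with replacement since corona has only 46 rows:

# Hypothetical helper: time ei_iter() on a resampled set of n precincts
time_ei <- function(n, parallel = FALSE) {
  idx <- sample(nrow(corona), n, replace = TRUE)
  start <- Sys.time()
  ei_iter(corona[idx, ], cand_cols, race_cols, totals_col,
          par_compute = parallel)
  difftime(Sys.time(), start, units = "mins")
}

# One run per sample size; repeat and average for a plot like the one above
sapply(c(100, 200, 300), time_ei)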
eiCompare provides capabilities to parallelize operations for iterative ei in ei_iter(), diagnostic tests in ei_rxc(), and geocoding in run_geocoder(). By setting the parallelization toggle to TRUE, as well as having the proper setup with more than 4 cores and more than 16 GB of RAM, the user should be able to run these operations many times faster.