
[OQSPS] Explaining Terrorism Threat Level Across Western Countries

#25
Here's the output (including the echoed commands) of my analytic replication, which I went over together with Noah. The only results that did not replicate were the confidence intervals for the Cohen's d values; perhaps Stata calculates these in some other way. This is of minor importance, however.

I will have Noah put my R code in the repository.

So I am satisfied with the replicability of this paper.
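For reference, one plausible source of the CI discrepancy is the interval method: effsize's cohen.d reports an interval based on the asymptotic standard error of d, whereas Stata's esize command (as far as I know) uses the noncentral t distribution. The sketch below is not part of the replication code; the function name d_ci_compare and the uniroot search bounds are just my own choices for illustration, and it assumes complete data within groups. It computes both kinds of interval from raw data so the two can be compared directly, e.g. d_ci_compare(d_west$fco, d_west$any).

Code:
# Minimal sketch (base R only): compare a normal-approximation CI for Cohen's d
# with a noncentral-t CI for a two-group comparison.
d_ci_compare = function(x, g, conf = .95) {
  g  = factor(g)
  n1 = sum(g == levels(g)[1], na.rm = TRUE) #assumes no missing x within groups
  n2 = sum(g == levels(g)[2], na.rm = TRUE)
  m  = tapply(x, g, mean, na.rm = TRUE)
  v  = tapply(x, g, var,  na.rm = TRUE)
  sp = sqrt(((n1 - 1) * v[1] + (n2 - 1) * v[2]) / (n1 + n2 - 2)) #pooled SD
  d  = unname((m[1] - m[2]) / sp)                                #Cohen's d

  #interval from the asymptotic SE of d (normal approximation)
  se = sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))
  ci_normal = d + c(-1, 1) * qnorm(1 - (1 - conf) / 2) * se

  #interval from the noncentral t distribution of the observed t statistic
  t_obs  = d * sqrt(n1 * n2 / (n1 + n2))
  df     = n1 + n2 - 2
  ncp_lo = uniroot(function(ncp) pt(t_obs, df, ncp) - (1 + conf) / 2, c(-100, 100))$root
  ncp_hi = uniroot(function(ncp) pt(t_obs, df, ncp) - (1 - conf) / 2, c(-100, 100))$root
  ci_nct = c(ncp_lo, ncp_hi) / sqrt(n1 * n2 / (n1 + n2))

  list(d = d, ci_normal = ci_normal, ci_noncentral_t = ci_nct)
}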

Code:
> source('Z:/Projects/Noah Carl terrorism 2016 paper/R/start.R', echo=TRUE)

> ### ANALYTIC REPLICATIONS OF NOAH CARL'S Explaining Terrorism Threat Level Across Western Countries
>
>
> # libs --------------------------------- .... [TRUNCATED]

> p_load(XLConnect, kirkegaard, psych, weights, magrittr, effsize, lsr, compute.es, MASS)

> # data --------------------------------------------------------------------
>
> d_main = readWorksheetFromFile(file = "data/Terrorism Threat.xlsx", .... [TRUNCATED]

> rownames(d_main) = d_main$country

> # missing pop data --------------------------------------------------------
> #if pop data is missing, use eu pop data
> d_main$pop10 = d_main[c("po ..." ... [TRUNCATED]

> # derived variables ------------------------------------------
>
> #dummies
> d_main$west = as.logical(d_main$west)

> d_main$europe = as.logical(d_main$europe)

> #muslim 2015 estimation
> d_main$muslim15 = d_main$muslim10 + .25 * (d_main$muslim30 - d_main$muslim10)

> d_main$muslim15_log = log10(d_main$muslim15 + 1)

> #winsorize Turkey's value
> d_main["Turkey", "muslim15"] = 19.075

> ## outcome transformations
>
> #casualties
> d_main$casualties = d_main$deaths + d_main$injuries

> d_main$casualties_per_cap = (d_main$casualties / d_main$pop10) * 1e6

> d_main$casualties_per_cap_log = log10(d_main$casualties_per_cap + 1)

> #arrests outcome
> d_main$arrests = df_func(d_main, pattern = "rear", func = function(row, na.rm) {
+   if(all(is.na(row))) return(NA)
+   sum(row,  .... [TRUNCATED]

> d_main$arrests_per_cap = (d_main$arrests / d_main$popeu10) * 1e6

> d_main$arrests_per_cap_log = log10(d_main$arrests_per_cap + 1)

> #var names
> v_terror_measures = c("fco", "casualties_per_cap_log", "arrests_per_cap_log")

> #military deaths
> d_main$mildeaths_log = log10(1 + d_main$mildeaths)

> #log gdp
> d_main$gdp_log = log10(d_main$gdpcap14)

> d_main$gdpeu_log = log10(d_main$gdpeu14)

> #factors
> d_main$any_num = d_main$any

> d_main$any = as.factor(d_main$any)

> d_main$part_num = d_main$part

> d_main$part = as.factor(d_main$part)

> # subsets -----------------------------------------------------------------
>
> d_west = d_main[d_main$west, ]

> d_euro = d_main[d_main$europe, ]

> d_eu = d_main[d_main$eu %>% as.logical(), ]

> d_eu_sub = d_main[d_main$eu %>% as.logical() & !rownames(d_main) %in% c("United Kingdom", "Croatia"), ]

> d_euwest = d_main[d_main$eu | d_main$west, ]

> d_oecd = d_main[!is.na(d_main$unemp14), ]

> d_euro_oecd = d_main[!is.na(d_main$unemp14) & d_main$europe, ]
> source('Z:/Projects/Noah Carl terrorism 2016 paper/R/analyses.R', echo=TRUE)

> # section 2.1 -------------------------------------------------------------
>
> #threat level
> psych::describe(d_west$fco)
  vars  n mean   sd median trimmed  mad min max range skew kurtosis   se
1    1 28 2.43 1.14    2.5    2.42 1.48   1   4     3 0.02    -1.48 0.21

> #casualties
> psych::describe(d_west$casualties_per_cap)
  vars  n mean  sd median trimmed mad min   max range skew kurtosis   se
1    1 28 4.11 9.7    0.2    1.93 0.3   0 44.11 44.11 2.86     8.15 1.83

> #arrests
> psych::describe(d_eu_sub$arrests_per_cap) #notice all data sample
  vars  n mean   sd median trimmed  mad min   max range skew kurtosis   se
1    1 26 1.98 2.62   0.95    1.53 1.41   0 11.12 11.12 1.86     3.36 0.51

> #intercorrelations
> cor.test(d_euwest$fco, d_euwest$casualties_per_cap_log)

    Pearson's product-moment correlation

data:  d_euwest$fco and d_euwest$casualties_per_cap_log
t = 4.9694, df = 33, p-value = 2.019e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.4104835 0.8107262
sample estimates:
      cor
0.6542357


> cor.test(d_euwest$fco, d_euwest$arrests_per_cap_log)

    Pearson's product-moment correlation

data:  d_euwest$fco and d_euwest$arrests_per_cap_log
t = 5.2074, df = 24, p-value = 2.46e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.4750103 0.8701950
sample estimates:
      cor
0.7283488


> cor.test(d_euwest$arrests_per_cap_log, d_euwest$casualties_per_cap_log)

    Pearson's product-moment correlation

data:  d_euwest$arrests_per_cap_log and d_euwest$casualties_per_cap_log
t = 5.1142, df = 24, p-value = 3.114e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.4647961 0.8669761
sample estimates:
      cor
0.7221351


> #scatterplots
> GG_scatter(d_euwest, x_var = "casualties_per_cap_log", y_var = "fco")
Loading required package: multilevel
Loading required package: nlme

Attaching package: ‘psychometric’

The following object is masked from ‘package:ggplot2’:

    alpha

The following object is masked from ‘package:psych’:

    alpha


> GG_scatter(d_euwest, x_var = "arrests_per_cap_log", y_var = "fco")

> GG_scatter(d_euwest, x_var = "arrests_per_cap_log", y_var = "casualties_per_cap_log")

> # section 2.2 -------------------------------------------------------------
>
> #muslims
> psych::describe(d_west$muslim15)
  vars  n mean   sd median trimmed  mad min max range skew kurtosis   se
1    1 28 3.12 2.58   2.52       3 3.45   0 8.2   8.2 0.31    -1.38 0.49

> psych::describe(d_eu_sub$muslim15)
  vars  n mean   sd median trimmed  mad min  max range skew kurtosis   se
1    1 26 3.88 5.15   2.35    2.92 3.34   0 22.7  22.7 2.05     4.54 1.01

> #western deaths
> table(d_west$any)

0  1
7 21

> table(d_west$any) %>% prop.table()

   0    1
0.25 0.75

> psych::describe(d_west$mildeaths)
  vars  n   mean      sd median trimmed   mad min  max range skew kurtosis     se
1    1 28 293.86 1295.96     11   29.83 16.31   0 6878  6878 4.68     20.8 244.91

> #western isis
> table(d_west$part)

0  1
21  7

> table(d_west$part) %>% prop.table()

   0    1
0.75 0.25

> #eu subsample deaths
> table(d_eu_sub$any)

0  1
7 19

> table(d_eu_sub$any) %>% prop.table()

        0         1
0.2692308 0.7307692

> psych::describe(d_eu_sub$mildeaths)
  vars  n  mean    sd median trimmed  mad min max range skew kurtosis   se
1    1 26 20.77 27.19    6.5   16.95 9.64   0  86    86 1.08    -0.23 5.33

> #eu sub against isis
> table(d_eu_sub$part)

0  1
22  4

> table(d_eu_sub$part) %>% prop.table()

        0         1
0.8461538 0.1538462

> # section 3.1 ---------------------------------------------------------------------
> cor.test(d_west$fco, d_west$muslim15_log)

    Pearson's product-moment correlation

data:  d_west$fco and d_west$muslim15_log
t = 4.755, df = 26, p-value = 6.417e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.4143524 0.8410761
sample estimates:
      cor
0.6820019


> lm("fco ~ poly(muslim15_log, 2)", data = d_west) %>% summary()

Call:
lm(formula = "fco ~ poly(muslim15_log, 2)", data = d_west)

Residuals:
     Min       1Q   Median       3Q      Max
-2.21732 -0.21609 -0.08645  0.65262  1.51373

Coefficients:
                       Estimate Std. Error t value Pr(>|t|)    
(Intercept)              2.4286     0.1625  14.948 5.67e-14 ***
poly(muslim15_log, 2)1   4.0265     0.8597   4.684 8.46e-05 ***
poly(muslim15_log, 2)2  -0.4081     0.8597  -0.475    0.639    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.8597 on 25 degrees of freedom
Multiple R-squared:  0.4699,    Adjusted R-squared:  0.4275
F-statistic: 11.08 on 2 and 25 DF,  p-value: 0.0003585


> cohen.d(d = d_west$fco, f = d_west$any)

Cohen's d

d estimate: -0.6888248 (medium)
95 percent confidence interval:
       inf        sup
-1.6418342  0.2641846

> cor.test(d_west$fco, d_west$mildeaths_log)

    Pearson's product-moment correlation

data:  d_west$fco and d_west$mildeaths_log
t = 2.1992, df = 26, p-value = 0.03696
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.02693856 0.67010334
sample estimates:
      cor
0.3960353


> lm("fco ~ poly(mildeaths_log, 2)", data = d_west) %>% summary()

Call:
lm(formula = "fco ~ poly(mildeaths_log, 2)", data = d_west)

Residuals:
     Min       1Q   Median       3Q      Max
-1.76252 -0.87979  0.09177  0.88862  1.96319

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)    
(Intercept)               2.4286     0.2048  11.856 9.29e-12 ***
poly(mildeaths_log, 2)1   2.3382     1.0839   2.157   0.0408 *  
poly(mildeaths_log, 2)2  -0.1427     1.0839  -0.132   0.8963    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.084 on 25 degrees of freedom
Multiple R-squared:  0.1574,    Adjusted R-squared:  0.09002
F-statistic: 2.336 on 2 and 25 DF,  p-value: 0.1175


> SMD_matrix(d_west$fco, d_west$part)

Attaching package: ‘plyr’

The following objects are masked from ‘package:Hmisc’:

    is.discrete, summarize

          0         1
0        NA -1.632013
1 -1.632013        NA

> cohen.d(d = d_west$fco, f = d_west$part)

Cohen's d

d estimate: -1.632013 (large)
95 percent confidence interval:
       inf        sup
-2.6807130 -0.5833124

> #scatterplots
> GG_scatter(d_west, x_var = "muslim15_log", y_var = "fco")

> GG_scatter(d_euwest, x_var = "mildeaths_log", y_var = "fco")

> #table 1
> d_betas = MOD_APSLM(dependent = "fco", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp_log", "unemp14", "ineq ..." ... [TRUNCATED]

Attaching package: ‘gtools’

The following object is masked from ‘package:psych’:

    logit


> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
    muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911   AIC   BIC   r2 r2.adj.    R R.adj.  N
8           0.72    0.37            NA       NA      NA      NA       NA 60.84 66.17 0.60    0.57 0.77   0.75 28
9           0.68      NA          0.39       NA      NA      NA       NA 59.51 64.84 0.62    0.59 0.79   0.77 28
10          0.53      NA            NA     0.39      NA      NA       NA 61.39 66.72 0.59    0.56 0.77   0.75 28
108         0.59    0.43            NA       NA    0.28    0.33     0.19 54.85 64.18 0.74    0.68 0.86   0.82 28
112         0.58      NA          0.45       NA    0.19    0.37     0.00 57.86 67.19 0.71    0.64 0.84   0.80 28
113         0.47      NA            NA     0.41    0.11    0.30     0.16 57.95 67.28 0.71    0.64 0.84   0.80 28

> # section 3.2 -------------------------------------------------------------
>
> cor.test(d_west$casualties_per_cap_log, d_west$muslim15_log)

    Pearson's product-moment correlation

data:  d_west$casualties_per_cap_log and d_west$muslim15_log
t = 2.4505, df = 26, p-value = 0.0213
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.07166936 0.69407601
sample estimates:
      cor
0.4331643


> lm("casualties_per_cap_log ~ poly(muslim15_log, 2)", data = d_west) %>% summary()

Call:
lm(formula = "casualties_per_cap_log ~ poly(muslim15_log, 2)",
    data = d_west)

Residuals:
     Min       1Q   Median       3Q      Max
-0.52171 -0.27188 -0.08152 -0.00535  1.32899

Coefficients:
                       Estimate Std. Error t value Pr(>|t|)  
(Intercept)             0.32293    0.08716   3.705  0.00105 **
poly(muslim15_log, 2)1  1.11005    0.46121   2.407  0.02381 *
poly(muslim15_log, 2)2  0.13051    0.46121   0.283  0.77952  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4612 on 25 degrees of freedom
Multiple R-squared:  0.1902,    Adjusted R-squared:  0.1254
F-statistic: 2.936 on 2 and 25 DF,  p-value: 0.07154


> cohen.d(d = d_west$casualties_per_cap_log, f = d_west$any)

Cohen's d

d estimate: -0.533525 (medium)
95 percent confidence interval:
       inf        sup
-1.4777792  0.4107291

> cor.test(d_west$casualties_per_cap_log, d_west$mildeaths_log)

    Pearson's product-moment correlation

data:  d_west$casualties_per_cap_log and d_west$mildeaths_log
t = 2.2741, df = 26, p-value = 0.03145
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.04036461 0.67744253
sample estimates:
      cor
0.4073078


> lm("casualties_per_cap_log ~ poly(mildeaths_log, 2)", data = d_west) %>% summary()

Call:
lm(formula = "casualties_per_cap_log ~ poly(mildeaths_log, 2)",
    data = d_west)

Residuals:
     Min       1Q   Median       3Q      Max
-0.38671 -0.23202 -0.18360 -0.00793  1.30405

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)    
(Intercept)               0.3229     0.0859   3.759 0.000917 ***
poly(mildeaths_log, 2)1   1.0438     0.4546   2.296 0.030316 *  
poly(mildeaths_log, 2)2   0.5588     0.4546   1.229 0.230409    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4546 on 25 degrees of freedom
Multiple R-squared:  0.2134,    Adjusted R-squared:  0.1505
F-statistic: 3.392 on 2 and 25 DF,  p-value: 0.04973


> cohen.d(d = d_west$casualties_per_cap_log, f = d_west$part)

Cohen's d

d estimate: -1.455832 (large)
95 percent confidence interval:
       inf        sup
-2.4815849 -0.4300796

> #scatterplots
> GG_scatter(d_west, x_var = "muslim15_log", y_var = "casualties_per_cap_log")

> GG_scatter(d_euwest, x_var = "mildeaths_log", y_var = "casualties_per_cap_log")

> #table 2
> d_betas = MOD_APSLM(dependent = "casualties_per_cap_log", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp_log ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
    muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911   AIC   BIC   r2 r2.adj.    R R.adj.  N
8           0.46    0.28            NA       NA      NA      NA       NA 77.82 83.15 0.27    0.21 0.51   0.45 28
9           0.43      NA          0.40       NA      NA      NA       NA 74.35 79.67 0.35    0.30 0.59   0.55 28
10          0.26      NA            NA     0.45      NA      NA       NA 74.06 79.39 0.36    0.31 0.60   0.55 28
108         0.37    0.30            NA       NA    0.19    0.21     0.22 79.56 88.89 0.37    0.23 0.61   0.47 28
112         0.35      NA          0.44       NA    0.16    0.30     0.01 77.56 86.89 0.41    0.28 0.64   0.53 28
113         0.21      NA            NA     0.46    0.09    0.24     0.15 75.91 85.24 0.45    0.32 0.67   0.57 28

> # section 3.3 -------------------------------------------------------------
>
> cor.test(d_eu_sub$arrests_per_cap_log, d_eu_sub$muslim15_log)

    Pearson's product-moment correlation

data:  d_eu_sub$arrests_per_cap_log and d_eu_sub$muslim15_log
t = 4.1268, df = 24, p-value = 0.0003823
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.3423411 0.8255832
sample estimates:
    cor
0.64426


> lm("arrests_per_cap_log ~ poly(muslim15_log, 2)", data = d_eu_sub) %>% summary()

Call:
lm(formula = "arrests_per_cap_log ~ poly(muslim15_log, 2)", data = d_eu_sub)

Residuals:
     Min       1Q   Median       3Q      Max
-0.51482 -0.14481 -0.07429  0.18269  0.49859

Coefficients:
                       Estimate Std. Error t value Pr(>|t|)    
(Intercept)              0.3516     0.0495   7.104 3.09e-07 ***
poly(muslim15_log, 2)1   1.0233     0.2524   4.054 0.000492 ***
poly(muslim15_log, 2)2  -0.1017     0.2524  -0.403 0.690613    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.2524 on 23 degrees of freedom
Multiple R-squared:  0.4192,    Adjusted R-squared:  0.3687
F-statistic: 8.299 on 2 and 23 DF,  p-value: 0.001934


> cohen.d(d = d_eu_sub$arrests_per_cap_log, f = d_eu_sub$any)

Cohen's d

d estimate: -0.3896577 (small)
95 percent confidence interval:
       inf        sup
-1.3471069  0.5677916

> cor.test(d_eu_sub$arrests_per_cap_log, d_eu_sub$mildeaths_log)

    Pearson's product-moment correlation

data:  d_eu_sub$arrests_per_cap_log and d_eu_sub$mildeaths_log
t = 1.5137, df = 24, p-value = 0.1432
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.1040370  0.6125213
sample estimates:
      cor
0.2952116


> lm("arrests_per_cap_log ~ poly(mildeaths_log, 2)", data = d_eu_sub) %>% summary()

Call:
lm(formula = "arrests_per_cap_log ~ poly(mildeaths_log, 2)",
    data = d_eu_sub)

Residuals:
     Min       1Q   Median       3Q      Max
-0.51633 -0.25960  0.00144  0.19440  0.62486

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)    
(Intercept)              0.35164    0.06083   5.781 6.88e-06 ***
poly(mildeaths_log, 2)1  0.46890    0.31017   1.512    0.144    
poly(mildeaths_log, 2)2  0.30038    0.31017   0.968    0.343    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3102 on 23 degrees of freedom
Multiple R-squared:  0.1229,    Adjusted R-squared:  0.04665
F-statistic: 1.612 on 2 and 23 DF,  p-value: 0.2213


> cohen.d(d = d_eu_sub$arrests_per_cap_log, f = d_eu_sub$part)

Cohen's d

d estimate: -2.365502 (large)
95 percent confidence interval:
       inf        sup
-3.7444042 -0.9865992

> #scatterplots
> GG_scatter(d_eu_sub, x_var = "muslim15_log", y_var = "arrests_per_cap_log")

> GG_scatter(d_eu_sub, x_var = "mildeaths_log", y_var = "arrests_per_cap_log")

> #table 3
> d_betas = MOD_APSLM(dependent = "arrests_per_cap_log", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdpeu_log" .... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
    muslim15_log any_num mildeaths_log part_num gdpeu_log uneu14 ineu14   AIC   BIC   r2 r2.adj.    R R.adj.  N
8           0.72    0.34            NA       NA        NA     NA     NA 61.33 66.37 0.53    0.49 0.73   0.70 26
9           0.66      NA          0.33       NA        NA     NA     NA 61.57 66.60 0.52    0.48 0.72   0.69 26
10          0.46      NA            NA     0.49        NA     NA     NA 55.81 60.85 0.62    0.58 0.79   0.76 26
108         0.69    0.38            NA       NA      0.05   0.13  -0.19 65.42 74.22 0.56    0.45 0.75   0.67 26
112         0.63      NA          0.36       NA      0.01   0.13  -0.21 65.64 74.45 0.56    0.45 0.75   0.67 26
113         0.45      NA            NA     0.52     -0.10   0.11  -0.06 60.46 69.26 0.64    0.55 0.80   0.74 26

> # appendix B --------------------------------------------------------------
>
> #table B1
> d_betas = MOD_APSLM(dependent = "fco", predictors = c(" ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
    muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911   AIC   BIC   r2 r2.adj.    R R.adj.  N
8           0.68    0.26            NA       NA      NA      NA       NA 73.39 79.49 0.59    0.56 0.77   0.75 34
9           0.69      NA          0.33       NA      NA      NA       NA 69.47 75.57 0.63    0.61 0.80   0.78 34
10          0.57      NA            NA     0.33      NA      NA       NA 71.53 77.64 0.61    0.58 0.78   0.76 34
108         0.55    0.36            NA       NA    0.40    0.32     0.37 62.19 72.88 0.75    0.71 0.87   0.84 34
112         0.61      NA          0.31       NA    0.27    0.28     0.23 65.17 75.86 0.73    0.68 0.85   0.82 34
113         0.48      NA            NA     0.35    0.27    0.33     0.22 65.63 76.31 0.72    0.68 0.85   0.82 34

> #table B2
> d_betas = MOD_APSLM(dependent = "fco", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp_log", "unemp14", "ine ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
    muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911   AIC   BIC   r2 r2.adj.    R R.adj.  N
8           0.76    0.37            NA       NA      NA      NA       NA 48.48 53.19 0.67    0.64 0.82   0.80 24
9           0.71      NA          0.34       NA      NA      NA       NA 49.77 54.48 0.65    0.62 0.81   0.79 24
10          0.60      NA            NA     0.26      NA      NA       NA 53.82 58.53 0.59    0.55 0.77   0.74 24
108         0.65    0.43            NA       NA    0.21    0.32     0.14 44.05 52.30 0.79    0.73 0.89   0.85 24
112         0.61      NA          0.41       NA    0.20    0.38     0.01 47.38 55.62 0.75    0.69 0.87   0.83 24
113         0.50      NA            NA     0.34    0.09    0.29     0.15 51.42 59.67 0.71    0.63 0.84   0.79 24

> # appendix C --------------------------------------------------------------
>
> #table C1
> d_betas = MOD_APSLM(dependent = "casualties_per_cap_log ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
    muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911   AIC    BIC   r2 r2.adj.    R R.adj.  N
8           0.59    0.20            NA       NA      NA      NA       NA 84.94  91.04 0.42    0.38 0.65   0.62 34
9           0.60      NA          0.17       NA      NA      NA       NA 85.50  91.61 0.41    0.37 0.64   0.61 34
10          0.56      NA            NA     0.14      NA      NA       NA 86.18  92.29 0.40    0.36 0.63   0.60 34
108         0.54    0.27            NA       NA    0.12    0.07     0.32 86.25  96.93 0.49    0.40 0.70   0.64 34
112         0.59      NA          0.16       NA    0.03    0.04     0.22 88.72  99.40 0.46    0.36 0.68   0.60 34
113         0.54      NA            NA     0.13    0.04    0.06     0.22 89.50 100.19 0.44    0.34 0.67   0.59 34

> #table C2
> d_betas = MOD_APSLM(dependent = "casualties_per_cap_log", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp_lo ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
    muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911   AIC   BIC   r2 r2.adj.    R R.adj.  N
8           0.53    0.29            NA       NA      NA      NA       NA 65.18 69.89 0.34    0.28 0.58   0.52 24
9           0.48      NA          0.31       NA      NA      NA       NA 64.83 69.55 0.35    0.29 0.59   0.53 24
10          0.31      NA            NA     0.38      NA      NA       NA 64.24 68.95 0.36    0.30 0.60   0.55 24
108         0.47    0.32            NA       NA    0.11    0.19     0.13 68.94 77.18 0.40    0.23 0.63   0.48 24
112         0.42      NA          0.34       NA    0.11    0.25     0.02 68.92 77.16 0.40    0.23 0.63   0.48 24
113         0.23      NA            NA     0.45    0.07    0.22     0.14 66.95 75.20 0.45    0.29 0.67   0.54 24

> # appendix D --------------------------------------------------------------
>
> #general terrorism score
> pca_terror = principal(d_eu_sub[v_terror .... [TRUNCATED]

> d_eu_sub$pca_terror = pca_terror$scores %>% as.vector()

> #table D1
> d_betas = MOD_APSLM(dependent = "pca_terror", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdpeu_log", "uneu1 ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
    muslim15_log any_num mildeaths_log part_num gdpeu_log uneu14 ineu14   AIC   BIC   r2 r2.adj.    R R.adj.  N
8           0.77    0.34            NA       NA        NA     NA     NA 57.86 62.89 0.59    0.55 0.77   0.74 26
9           0.71      NA          0.31       NA        NA     NA     NA 58.80 63.84 0.57    0.53 0.76   0.73 26
10          0.54      NA            NA     0.39        NA     NA     NA 56.44 61.47 0.61    0.57 0.78   0.76 26
108         0.67    0.43            NA       NA      0.24   0.28  -0.10 58.10 66.90 0.67    0.58 0.82   0.76 26
112         0.60      NA          0.38       NA      0.19   0.27  -0.13 60.26 69.07 0.64    0.55 0.80   0.74 26
113         0.45      NA            NA     0.45      0.07   0.24   0.01 59.15 67.95 0.65    0.57 0.81   0.75 26

> # extra -------------------------------------------------------------------
>
> #factor analysis
> fa_terror = fa(d_west[v_terror_measures])

> d_west$fa_terror = fa_terror$scores %>% as.vector()

> #lasso
> d_betas_lasso = MOD_LASSO(df = d_west, dependent = "fa_terror", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp ..." ... [TRUNCATED]
Loading required package: Matrix
Loading required package: foreach
foreach: simple, scalable parallel programming from Revolution Analytics
Use Revolution R for scalability, fault tolerance and more.
http://www.revolutionanalytics.com
Loaded glmnet 2.0-5

Missing data removed.
Data standardized.

> MOD_summarize_models(d_betas_lasso)
                muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911
mean                   0.055       0             0    0.014       0       0        0
median                 0.060       0             0    0.000       0       0        0
sd                     0.053       0             0    0.024       0       0        0
mad                    0.089       0             0    0.000       0       0        0
fraction_zeroNA        0.412       1             1    0.648       1       1        1
There were 50 or more warnings (use warnings() to see the first 50)
