For data with a heavy tail, this function can be used to compute the aligned-rank-transformed data and then run analysis of variance or mixed-effects models.

This approach is used when the data has a non-normal distribution.

```
function Raligned = Aligned_Rank_Transform(Arg)
% INPUT: Arg: an n-by-3 matrix
% columns of Arg:
%   1 - first categorical variable
%   2 - second categorical variable
%   3 - dependent variable
%
% This function performs the aligned rank transform for data with a
% heavy tail --> TO SEE FIXED EFFECTS AND THE INTERACTION EFFECT
% using analysis of variance. This is an approach used when the data
% has a non-normal distribution.
%
% Reference:
% Wobbrock, J.O., Findlater, L., Gergle, D. and Higgins, J.J. (2011).
% The Aligned Rank Transform for nonparametric factorial analyses using
% only ANOVA procedures. Proceedings of the ACM Conference on Human
% Factors in Computing Systems (CHI '11). Vancouver, British Columbia
% (May 7-12, 2011). New York: ACM Press, pp. 143-146. Honorable Mention Paper.
% http://faculty.washington.edu/wobbrock/pubs/chi-11.06.pdf
%
% by: AbuAli Amin
% www.aminbros.com

r1 = Arg(:,1);
r2 = Arg(:,2);
R  = Arg(:,3);
q1 = unique(r1);
q2 = unique(r2);
if isnumeric(q1)
    q1 = cellstr(num2str(q1));
    r1 = cellstr(num2str(r1));
end
if isnumeric(q2)
    q2 = cellstr(num2str(q2));
    r2 = cellstr(num2str(r2));
end
Z         = zeros(size(R));
Yaligned1 = zeros(size(R));
Yaligned2 = zeros(size(R));
Yaligned3 = zeros(size(R));
mu = nanmean(R);
for i = 1:size(q1,1)
    ME1 = nanmean(R(strcmp(r1,q1{i}))) - mu;            % main effect of factor 1
    for j = 1:size(q2,1)
        msk = strcmp(r1,q1{i}) & strcmp(r2,q2{j});      % observations in this cell
        Z(msk) = R(msk) - nanmean(R(msk));              % residuals: remove the cell mean
        ME2  = nanmean(R(strcmp(r2,q2{j}))) - mu;       % main effect of factor 2
        ME12 = nanmean(R(msk)) - mu;
        MEI  = ME12 - ME1 - ME2 + mu;                   % interaction effect
        Yaligned1(msk) = Z(msk) + ME1;
        Yaligned2(msk) = Z(msk) + ME2;
        Yaligned3(msk) = Z(msk) + MEI;
    end
end
R1 = tiedrank(Yaligned1);
R2 = tiedrank(Yaligned2);
R3 = tiedrank(Yaligned3);
Raligned = [R1, R2, R3];
```

This is clearly an issue. I propose two solutions for it.

1. Turn off browser scroll

By that I mean: do not put more content than fits on the screen, which results in no scrolling at all.

2. Set your content height to more than `window.innerHeight`.

Here’s the code to calculate it somewhat accurately.

```
// compats holds data about compatibility stuff
var compats = {};
var ua = navigator.userAgent;

function isWindowLandscape() {
  return window.innerWidth > window.innerHeight;
}

// iOS Safari resizes the viewport when its toolbars hide on scroll
compats.start_with_resize_onscroll =
    ua.indexOf("Safari") != -1 && ua.indexOf("Mobile") != -1;

// Chrome iOS browser
compats.start_with_resize_onscroll_CriOS =
    compats.start_with_resize_onscroll && ua.indexOf("CriOS") != -1;

compats.start_with_resize_onscroll_full_window_height = function() {
  var least_nav_height = compats.start_with_resize_onscroll_CriOS ? 0 : 38;
  if (isWindowLandscape())
    return window.screen.width - least_nav_height;
  else
    return window.screen.height - least_nav_height;
};
```

You might also be struggling with another issue related to displaying fixed-position content.


It depends on the iOS version. I did not test it with older versions; it was tested on iOS 9.3.

```
/* css code */
div.mydiv {
    transform: translate3d(0px, 0px, 0px);
}
```

Replacing the element with a canvas also fixes the issue. I tested this by replacing an image with a canvas.

You might also be experiencing another issue related to this.

Embed program data within an executable: text files as C strings and binary files as C arrays.

file2var is an easy-to-use program that does just that.

For example, suppose we need to collect GLSL shader programs into one pair of C files (.h and .c); file2var can be used to do exactly that.

```
$ ./file2var txt2str -p c -o ../src/program *.glsl
output files are program.h and program.c

--- c file (program.c)
char myshader_glsl[] = "......";

--- header file (program.h)
extern char myshader_glsl[];

--- in your code
#include "program.h"

int main(...) {
    ...
    const char *src = myshader_glsl;
    glShaderSource(shader, 1, &src, NULL);
    ...
}
```

The function is easy to use: the user only needs to enter the vector of items and their labels, then run it. It performs the analysis quickly and prints the results in well-arranged tables. Using it makes reliability analysis and computing the scale parameters much easier than using SPSS. SPSS is easy too, but this function's automatic removal of items, well-numbered result tables and explicit reporting of each step make the analysis and the interpretation of the results very easy.

Here is a document file that shows the results of the reliability analysis produced by running this function to find Cronbach's alpha.
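For reference, here is a minimal sketch of the Cronbach's alpha computation itself. This is the standard textbook formula, not the author's MATLAB code, and the function name is mine:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

When all items are perfectly correlated the statistic equals 1; values above roughly 0.7 are conventionally taken as acceptable internal consistency.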


The Value at Risk formula uses the volatility, skewness, excess kurtosis and the number of trading periods in the recommended holding period.

The code uses the moment formulas to find the first four moments and then calculates the volatility, skewness and excess kurtosis from them.
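A sketch of that moment calculation (population central moments are shown; whether the author's code uses population or sample moments is an assumption):

```python
import numpy as np

def sample_moments(r):
    """Volatility, skewness and excess kurtosis of a return series r."""
    r = np.asarray(r, dtype=float)
    m1 = r.mean()
    m2 = np.mean((r - m1) ** 2)   # second central moment
    m3 = np.mean((r - m1) ** 3)   # third central moment
    m4 = np.mean((r - m1) ** 4)   # fourth central moment
    vol = np.sqrt(m2)
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0   # subtract 3 so a normal distribution scores 0
    return vol, skew, excess_kurt
```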

The code then uses the recommended holding period T to find the VaR-equivalent volatility (VEV).
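Assuming the code follows the PRIIPs Category 2 methodology (Commission Delegated Regulation (EU) 2017/653, Annex II), the Cornish-Fisher 97.5% VaR and the VaR-equivalent volatility would look roughly like this sketch; the function names are mine:

```python
import math

def var_return_space(vol, skew, excess_kurt, N):
    """Cornish-Fisher 97.5% VaR in return space over N trading periods."""
    return vol * math.sqrt(N) * (-1.96
                                 + 0.474 * skew / math.sqrt(N)
                                 - 0.0687 * excess_kurt / N
                                 + 0.146 * skew ** 2 / N) - 0.5 * vol ** 2 * N

def vev(var_rs, T):
    """VaR-equivalent volatility for a recommended holding period of T years."""
    return (math.sqrt(3.842 - 2.0 * var_rs) - 1.96) / math.sqrt(T)
```

With zero skewness and excess kurtosis the VEV reduces to approximately the annualized volatility, which is a quick sanity check on the formula.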

After finding the annualized volatility, the code runs the Monte Carlo repetitions and calculates the annualized volatility for all existing periods.

The code then sorts the results in ascending order and takes the Value at Risk at the 97.5th percentile. After that, it checks the following conditional statements to find the MRM class of the data.

The code also checks the PRIIPs Category 2 value at the 10th, 50th and 90th percentiles to categorize the stock's Value at Risk as unfavorable, moderate and favorable respectively.

Below is a screenshot taken from the code:


Below are two short videos that show this random movement at different speeds, with the marker bouncing back at each boundary of the area.

The second video shows the marker moving at a higher speed.
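The bouncing behaviour seen in the videos can be sketched as follows; the function name, boundary arguments and reflection rule here are hypothetical, not taken from the author's code:

```python
def step(pos, vel, lo, hi):
    """Advance one axis of the marker by its velocity, reflecting at the edges."""
    pos += vel
    if pos < lo:                     # bounce off the lower boundary
        pos, vel = 2 * lo - pos, -vel
    elif pos > hi:                   # bounce off the upper boundary
        pos, vel = 2 * hi - pos, -vel
    return pos, vel
```

A larger velocity magnitude gives the faster movement shown in the second video; the same rule is applied independently to the x and y axes.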

at is the level, bt is the trend and Ft is the seasonality; alpha, beta and gamma are the damping (smoothing) ratios for the level, trend and seasonality respectively.

This is the Holt-Winters method for exponential smoothing of data that includes seasonality.

The values of alpha, beta and gamma are found by optimising an objective.

We can therefore define our own objective for estimating these ratios; otherwise, we can estimate them by minimizing one of the following:

MSE: Mean squared error.

MAE: Mean absolute error.

MAPE: Mean absolute percentage error.
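As a sketch, the three objectives can be written directly from their definitions (the function names are mine):

```python
def mse(actual, forecast):
    """Mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mae(actual, forecast):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error (in percent); assumes no zero actuals."""
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual) * 100
```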

If we want to minimize one of these objectives in Excel, we can use Solver.

The following formula is used to find the forecast m steps ahead (for m up to s):

Forecast(t+m) = (at + m * bt) * Ft+m-s

at is the level in period t and bt is the trend in period t.

Ft+m-s is the seasonality in period t + m - s.

For example, if we want to calculate the forecast one step ahead, we will have:

Forecast(t+1) = (at + bt) * Ft+1-s

s in the above formula is the number of points in each season. For example, if our data repeats its fluctuations every 3 points, then s = 3.

If our data has monthly seasonality, s = 12.

The image below shows an example of the Holt-Winters method, used to calculate the exponential smoothing and the forecast in Excel.
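The method can also be sketched in code. This is a minimal multiplicative Holt-Winters implementation, not the author's Excel workbook; the initialisation (level = first-season mean, trend = 0, seasonal factors from the first season) and the fixed parameter values are illustrative assumptions:

```python
def holt_winters_forecast(y, s, alpha, beta, gamma, m):
    """Smooth series y (season length s) multiplicatively, then forecast
    m steps ahead (m <= s) as (level + m*trend) * seasonal factor."""
    # naive initialisation from the first season
    level = sum(y[:s]) / s
    trend = 0.0
    season = [v / level for v in y[:s]]
    for t in range(s, len(y)):
        last_level = level
        # level: deseasonalised observation vs. previous level + trend
        level = alpha * (y[t] / season[t % s]) + (1 - alpha) * (level + trend)
        # trend: change in level vs. previous trend
        trend = beta * (level - last_level) + (1 - beta) * trend
        # seasonality: observation relative to the new level
        season[t % s] = gamma * (y[t] / level) + (1 - gamma) * season[t % s]
    return (level + m * trend) * season[(len(y) + m - 1) % s]
```

In practice alpha, beta and gamma would be chosen by minimizing MSE, MAE or MAPE over the fitted values, as described above.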

There are two grouping variables: one is gender and the other is age, and we want to see whether these two categorical variables have a significant effect on the duration of watching TV. Age is categorized into two groups: under 18 and over 18. In the tutorial, an anova2 test and a post-hoc test (multcompare) were performed to see the effect of each grouping variable.

Of course, the post-hoc test was not required for this data, since age has only two groups. A post-hoc test is used when one of the categorical variables has more groups, to see which groups differ significantly from the baseline.
