UCLChangeMakers-StatisticalVisualisation-RShiny
Population
Sample
Quantile Plots and Skewness
Maximum Likelihood Estimation
Definition
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of
a statistical model, given observations. The method obtains the parameter estimates by finding the parameter
values that maximize the likelihood function. The estimates are called maximum likelihood estimates,
which is also abbreviated as MLE.
Step 1
Write down the likelihood function of the observed data as a function of the unknown parameter(s).

Step 2
Take the natural logarithm to obtain the log-likelihood, which is easier to work with.

Step 3
Differentiate the log-likelihood with respect to each parameter and set the derivatives equal to zero.

Step 4
Solve the resulting equations for the parameters; the solutions are the maximum likelihood estimates.

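As a concrete illustration of these steps, here is a minimal R sketch (not taken from the app) that finds the maximum likelihood estimates for a normal sample numerically, using optim() to maximise the log-likelihood; the simulated data and starting values are arbitrary.

```r
# Minimal MLE sketch: estimate the mean and sd of a normal sample
# by numerically maximising the log-likelihood with optim().
set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)  # simulated observations

# Negative log-likelihood (optim() minimises by default)
negloglik <- function(par) {
  mu    <- par[1]
  sigma <- par[2]
  if (sigma <= 0) return(Inf)  # keep sigma in the valid region
  -sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))
}

fit <- optim(par = c(0, 1), fn = negloglik)
fit$par  # MLEs of (mu, sigma); close to the analytic mean(x) and sd
```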
General Linear Model
Visualisation of the selected features
Generalised Additive Model
Plots of GAM
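For readers who want to reproduce such plots outside the app, here is a minimal sketch with the mgcv package (an assumption; the app's own implementation may differ) that fits a penalised smooth of a single predictor and plots the estimated smooth term:

```r
# Minimal GAM sketch with mgcv: fit a smooth term and plot it.
library(mgcv)

set.seed(1)
n <- 200
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)  # simulated nonlinear relationship

fit <- gam(y ~ s(x))     # s(x): penalised smooth of x
summary(fit)
plot(fit, shade = TRUE)  # estimated smooth with a shaded confidence band
```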
Survival Analysis
Kaplan-Meier Estimate Basics
Dataset Info
Dataset Summary
Plot
Select Covariate
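As an illustration of the Kaplan-Meier estimate behind these tabs, here is a minimal sketch with the survival package and its built-in lung dataset (chosen here purely for demonstration) that stratifies the survival curves by one covariate:

```r
# Minimal Kaplan-Meier sketch with the survival package.
library(survival)

# lung: built-in dataset with survival time, censoring status and covariates
fit <- survfit(Surv(time, status) ~ sex, data = lung)

summary(fit, times = c(100, 300, 500))  # survival estimates at chosen times
plot(fit, col = c("blue", "red"),
     xlab = "Days", ylab = "Survival probability")
legend("topright", legend = c("male", "female"),
       col = c("blue", "red"), lty = 1)
```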
Brownian Motion is cool!
Plots of Brownian Motion
$$\text{GBM Model: } S_t = S_0 \exp\left(\left(\mu - \frac{\sigma^2}{2}\right)t + \sigma W_t\right)$$
where S_t is the stock price at time t, mu is the drift rate, sigma is the yearly volatility and W_t is a standard Brownian motion.
To run the simulation you have to enter the following inputs in the side bar:
Initial Stock Price is the current price of the stock;
Drift Rate is the expected rate of return;
Yearly Standard Deviation is the volatility of the stock price;
Number of Simulations is how many simulated stock-price paths you want to display.
In the side bar it is also possible, through a check box, to fix the seed to a chosen value. Mark the check box and select the value from the numeric box; if it is left unmarked, the seed will be assigned randomly. As the computation time increases with the number of simulations, there is a 'Submit' button to click once the parameters are chosen.
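The sketch below (a stand-in, not the app's actual server code) shows how such paths can be simulated by discretising the GBM formula above over one year of daily steps; all parameter values are arbitrary examples.

```r
# Minimal GBM simulation: S_t = S0 * exp((mu - sigma^2/2) * t + sigma * W_t).
set.seed(42)     # fixed seed, as with the app's check box
S0      <- 100   # initial stock price
mu      <- 0.05  # drift rate (expected yearly return)
sigma   <- 0.20  # yearly standard deviation (volatility)
n_sim   <- 10    # number of simulated paths
n_steps <- 252   # daily steps over one year
dt      <- 1 / n_steps

t <- seq(0, 1, length.out = n_steps + 1)
paths <- replicate(n_sim, {
  W <- c(0, cumsum(rnorm(n_steps, sd = sqrt(dt))))  # Brownian motion W_t
  S0 * exp((mu - sigma^2 / 2) * t + sigma * W)
})

matplot(t, paths, type = "l", lty = 1,
        xlab = "Time (years)", ylab = "Stock price")
```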
Import your dataset (please wrangle your dataset first)
Explore decision tree!
Accuracy
Sensitivity/Recall
Specificity
Precision
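Writing TP, TN, FP and FN for the true positive, true negative, false positive and false negative counts of the confusion matrix, these metrics are defined as:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{Sensitivity/Recall} = \frac{TP}{TP + FN}$$
$$\text{Specificity} = \frac{TN}{TN + FP}, \qquad \text{Precision} = \frac{TP}{TP + FP}$$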
Welcome to the forest!
Key Knowledge
Simple sketch of Random Forest
Random Forest
Consider building a random forest consisting of N trees (for k = 1, 2, ..., N).
Step One
Generate a bootstrap sample Xk by drawing from the training data with replacement.
Step Two
Build a decision tree based on that sample
Step Three
From all the features (e.g. T variables) prepared to train the (entire) random forest model, randomly select M of them to feed into this decision tree.
Step Four
Select the first split by searching over the randomly chosen features and choosing the one that maximizes information gain (e.g. via the entropy or Gini criterion; more details to be added).
Step Five
Repeat this splitting process until the sample is exhausted or the given restriction is reached (e.g. the maximum depth of the tree).
Step Six
Get N classifiers from the N decision trees, then aggregate their predictions (majority vote for classification, averaging for regression) to obtain the final classifier.
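A minimal R sketch of this procedure with the randomForest package (named here for illustration; the app may use a different implementation), growing N = 100 trees on the built-in iris data:

```r
# Minimal random forest sketch: N trees, each grown on a bootstrap sample,
# with mtry features considered at each split.
library(randomForest)

set.seed(1)
fit <- randomForest(Species ~ ., data = iris,
                    ntree = 100,  # N: number of trees
                    mtry  = 2)    # M: features tried at each split

print(fit)                # OOB error estimate and confusion matrix
predict(fit, head(iris))  # final classifier: majority vote over the trees
```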
Key-in instructions:
E.g. for the objective function z = x1 + 2 x2, key in '1,2';
for the constraints x1 + 2 x2 <= 3 and 2 x1 + 3 x2 <= 5, key in '1,2,3,2,3,5' (each constraint's coefficients followed by its right-hand side).
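For reference, this example problem can be solved directly in R with the lpSolve package (an assumption; the app may implement the steps itself):

```r
# Solve: maximise z = x1 + 2*x2
# subject to x1 + 2*x2 <= 3 and 2*x1 + 3*x2 <= 5, with x1, x2 >= 0.
library(lpSolve)

obj <- c(1, 2)                         # objective coefficients: '1,2'
con <- matrix(c(1, 2,                  # constraint coefficients, one row each
                2, 3), nrow = 2, byrow = TRUE)
dir <- c("<=", "<=")
rhs <- c(3, 5)                         # right-hand sides

sol <- lp(direction = "max", objective.in = obj,
          const.mat = con, const.dir = dir, const.rhs = rhs)
sol$solution  # optimal (x1, x2)
sol$objval    # optimal objective value
```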