Econometric Analysis Of Bitcoin Versus Euro

Introduction

In this chapter, the relationship between Bitcoin and three indicators, oil, gold, and the Euro, is discussed. The impact of oil, the Euro, and gold on Bitcoin, and vice versa, is investigated via Granger causality and the modified Dickey-Fuller test.

It is best to first describe the Granger causality model and then the modified Dickey-Fuller test.

Granger causality is a way to examine causality among variables in a time series. The approach is a probabilistic account of causality: it uses empirical data sets to find patterns of correlation.

Causality is related to the idea of cause and effect, although it is not exactly the same. A variable X is causal to a variable Y if X is the cause of Y or Y is the cause of X. With Granger causality, however, you are not testing a true cause-and-effect relationship; what you want to know is whether a particular variable comes before another in the time series.

In other words, if you find Granger causality in your data, there is not a causal link in the true sense of the word. Note: when econometricians say 'cause', what they mean is 'Granger-cause', although a more appropriate word might be 'precedence'. Granger causality is a 'bottom up' procedure, where the assumption is that the data-generating processes in the time series are independent; the data sets are then analyzed to see whether they are correlated. The alternative is a 'top down' approach, which assumes the processes are not independent; the data sets are then analyzed to see whether they could have been generated independently of each other.

The null hypothesis for the test is that lagged x-values do not explain the variation in y. In other words, it assumes that x(t) does not Granger-cause y(t). In theory, you could run the Granger test to find out whether two variables are related at an instantaneous moment in time, but that version of the test is seldom used because it is not very informative, so it is not covered here.

The process can get complicated because of the large number of choices involved, including selecting from a set of equations for the F-value calculations. You can skip most of the intermediate steps by using software: the Granger causality test is included in many popular econometrics packages, such as EViews and PcGive, and any number of lags can be selected with a few clicks.

Make sure your time series is stationary before proceeding. The data may need to be transformed to eliminate the possibility of autocorrelation. You should also make sure your model does not have any unit roots, as these will skew the test results.
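As a minimal sketch of this preparation step (assuming weekly closing prices are available in a CSV file; the file name and the column names btc, eur, gold, and oil are hypothetical placeholders, not from the paper), the series can be log-transformed and first-differenced before any causality testing. This corresponds to the D(LOG(...)) transformation used in the tables later in this chapter.

```python
# Minimal preparation sketch: log-transform and first-difference the price series.
# File name and column names are hypothetical placeholders.
import numpy as np
import pandas as pd

prices = pd.read_csv("weekly_prices.csv", index_col=0, parse_dates=True)
log_prices = np.log(prices[["btc", "eur", "gold", "oil"]])   # LOG(P) levels
log_returns = log_prices.diff().dropna()                     # D(LOG(P)) differences
```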

The basic steps for running the test are as follows:

State the null hypothesis and the alternative hypothesis. For instance: y(t) does not Granger-cause x(t).

Choose the lags. This largely depends on how much data you have available. One way to choose lags i and j is to run a model order test (i.e. use a model order selection method). It may be easier simply to pick several values and run the Granger test multiple times to see whether the results are the same for different lag levels; the results should not be sensitive to the lags.

Find the F-value. Two equations can be used to test whether βⱼ = 0 for all lags j, i.e. whether the lags of x help explain y:

y(t) = a₀ + a₁y(t−1) + … + aᵢy(t−i) + u(t)   (restricted)

y(t) = a₀ + a₁y(t−1) + … + aᵢy(t−i) + β₁x(t−1) + … + βⱼx(t−j) + u(t)   (unrestricted)

An analogous pair of equations, with the roles of x and y swapped, tests whether y(t) Granger-causes x(t).

If you have a large number of variables and lags, your F-test can lose power. An alternative is to run a chi-square test, constructed with likelihood ratio or Wald tests. Although both variants give essentially the same result, the F-test is much simpler to run (see the sketch below).
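As an illustrative sketch (not the paper's own code), the statsmodels function grangercausalitytests runs both the F-test and the chi-square/likelihood-ratio variants described above on a pair of series; log_returns refers to the differenced-log data from the earlier sketch.

```python
# Pairwise Granger causality sketch with statsmodels.
from statsmodels.tsa.stattools import grangercausalitytests

# Column order matters: the test asks whether the SECOND column helps predict the
# first one, i.e. here whether oil Granger-causes Bitcoin.
results = grangercausalitytests(log_returns[["btc", "oil"]], maxlag=4)
# For each lag, the output contains 'ssr_ftest', 'ssr_chi2test', 'lrtest'
# and 'params_ftest' entries (statistic, p-value, degrees of freedom).
```

Running the test for several maxlag values is a quick way to check that the conclusions are not sensitive to the lag choice.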

In statistics and econometrics, the modified Dickey-Fuller test (ADF-GLS) tests for a unit root in an economic time series sample. It was developed by Elliott, Rothenberg and Stock (ERS) in 1992 as a modification of the augmented Dickey-Fuller test (ADF).

A unit root test determines whether a time series variable is non-stationary, using an autoregressive model. For series featuring deterministic components in the form of a constant or a linear trend, ERS developed an asymptotically point-optimal test for detecting a unit root.

This testing procedure dominates other existing unit root tests in terms of power. It locally de-trends (de-means) the data series to estimate the deterministic parameters of the series efficiently, and uses the transformed data to perform a standard ADF unit root test. This procedure helps remove the means and linear trends of series that are not far from the non-stationary region.
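A hedged sketch of this unit root step is given below. statsmodels provides the ordinary augmented Dickey-Fuller test (adfuller); the GLS-detrended ERS variant is available as DFGLS in the separate arch package, if installed. The exact specification used in the paper is not stated; regression="ct" (constant and trend) is an assumption here that matches the critical values reported in Table 1.

```python
# Unit root check per series (sketch). log_returns comes from the earlier sketch.
from statsmodels.tsa.stattools import adfuller

for name in ["btc", "eur", "gold", "oil"]:
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(log_returns[name], regression="ct")
    print(f"{name}: ADF = {stat:.4f}, p = {pvalue:.4f}, 5% critical = {crit['5%']:.4f}")
```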

Experimental

Statistics and data of this investigation

For this purpose, weekly data from 2016 to 2019 have been used. As mentioned before (Chapter 3), in July 2016 Bitcoin fluctuated around a fairly stable market price, staying within the $200-1,000 range; from July 2017 onwards there was a steady increase in the market value of Bitcoin, while from late 2017 into 2018 its value decreased.

Research methodology

This study investigates whether oil, gold, and Euro prices can help to predict Bitcoin prices in a simple way; in other words, can these variables Granger-cause Bitcoin? For this purpose, the Granger method described in detail above has been used.

The Granger causality test most commonly used in econometric texts and software is known simply as the Granger test, based on a bivariate vector autoregressive model.

Model Estimations

For the purpose of estimating the Granger model, the logarithms of the data series are taken first. A unit root test is also performed here.

Investigation of Stationarity

As mentioned, the logarithm of the data series is used to achieve optimal results. One of the most commonly used tests for detecting stationarity is the unit root test, which rests on the following logic: if the test results yield ρ = 1, the series is not stationary.

In statistics, the Dickey-Fuller test examines the null hypothesis that a unit root is present in an autoregressive model. The alternative hypothesis differs depending on which version of the test is used, but it is usually stationarity or trend-stationarity.

A simple AR(1) model is

yₜ = ρyₜ₋₁ + uₜ

where yₜ is the variable of interest, t is the time index, ρ is a coefficient, and uₜ is the error term. A unit root is present if ρ = 1, and the model is non-stationary in this case.

The regression model can be written as

Δyₜ = (ρ − 1)yₜ₋₁ + uₜ = δyₜ₋₁ + uₜ

where Δ is the first-difference operator. This model can be estimated, and testing for a unit root is equivalent to testing δ = 0 (where δ ≡ ρ − 1). Since the test is done over the residual term rather than the raw data, standard t-distribution critical values cannot be used; this test statistic therefore has a specific distribution, known simply as the Dickey-Fuller table. There are three main versions of the test:

  1. Test for a unit root: Δyₜ = δyₜ₋₁ + uₜ
  2. Test for a unit root with drift: Δyₜ = a₀ + δyₜ₋₁ + uₜ
  3. Test for a unit root with drift and deterministic time trend: Δyₜ = a₀ + a₁t + δyₜ₋₁ + uₜ

Each version of the test has its own critical value, which depends on the size of the sample. In each case, the null hypothesis is that there is a unit root, δ = 0. The tests have low statistical power in that they often cannot distinguish between true unit-root processes (δ = 0) and near unit-root processes (δ close to zero). This is called the 'near observational equivalence' problem.

The intuition behind the test is as follows. If the series y is stationary (or trend-stationary), then it tends to revert to a constant (or deterministically trending) mean. Hence, large values will tend to be followed by smaller values (negative changes), and small values by larger values (positive changes). Therefore, the level of the series will be a significant predictor of the next period's change and will have a negative coefficient. If, on the other hand, the series is integrated, then positive and negative changes will occur with probabilities that do not depend on the current level of the series; as in a random walk, where your current position does not affect which way you will move next.
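A short simulation (purely illustrative, not part of the study) makes this intuition concrete: for a stationary AR(1) the current level is negatively correlated with the next change, while for a random walk it is not.

```python
# Simulate a stationary AR(1) and a random walk to illustrate mean reversion.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 500, 0.5
shocks = rng.normal(size=n)

ar1 = np.zeros(n)    # stationary: rho < 1
walk = np.zeros(n)   # unit root: rho = 1
for t in range(1, n):
    ar1[t] = rho * ar1[t - 1] + shocks[t]
    walk[t] = walk[t - 1] + shocks[t]

# Correlation between the current level and the next-period change:
print(np.corrcoef(ar1[:-1], np.diff(ar1))[0, 1])    # clearly negative
print(np.corrcoef(walk[:-1], np.diff(walk))[0, 1])  # close to zero
```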

It is notable that the drift model

yₜ = a₀ + yₜ₋₁ + uₜ

may be rewritten as

yₜ = y₀ + a₀t + (u₁ + u₂ + … + uₜ)

with a deterministic trend coming from a₀t and a stochastic intercept term coming from the accumulated shocks, resulting in what is referred to as a stochastic trend.

As shown in Table 1, the stationarity investigation of the time series is carried out with the modified Dickey-Fuller test. In this test, the model includes a constant term but, as Dickey and Fuller suggested, no time trend.

In the modified Dickey-Fuller test, the hypotheses are H₀: α = 0 and H₁: α < 0.

Table 1. Stationarity Investigation Of The Time Series Through The Modified Dickey-Fuller Test

Variable   ADF test statistic   Prob.*   Critical value (1%)   Critical value (5%)   Critical value (10%)
EUR        -12.35323            0.0000   -4.019151             -3.439461             -3.144113
BTC        -13.20570            0.0000   -4.018748             -3.439267             -3.143999
GOLD       -12.84728            0.0000   -4.019151             -3.439461             -3.144113
OIL        -11.86329            0.0000   -4.019151             -3.439461             -3.144113

*MacKinnon (1996) one-sided p-values.

The Johansen-Juselius cointegration test

The results of the previous section show that the data in the period under review are I(1). Therefore, a vector error correction model is appropriate for studying the causal relationship between the variables, and the cointegration of the series should be considered first.

The Johansen-Juselius method has been used with both the maximum eigenvalue test and the trace test.

In statistics, the Johansen test [1], named after Søren Johansen, is a method for testing the cointegration of several, say k, I(1) time series. [2] This test permits more than one cointegrating relationship, so it is more generally applicable than the Engle-Granger test, which is based on the Dickey-Fuller (or augmented) test for unit roots in the residuals from a single (estimated) cointegrating relationship.

There are two types of Johansen test, one with trace and one with eigenvalue, and the inferences might differ slightly. The null hypothesis for the trace test is that the number of cointegration vectors is r = r* < k, vs. the alternative that r = k. Testing proceeds sequentially for r* = 1, 2, etc., and the first non-rejection of the null is taken as an estimate of r. The null hypothesis for the 'maximum eigenvalue' test is as for the trace test, but the alternative is r = r* + 1; again, testing proceeds sequentially for r* = 1, 2, etc., with the first non-rejection used as an estimator for r.

Just like a unit root test, there can be a constant term, a trend term, both, or neither in the model. [3]

For a general VAR(p) model

Xₜ = μ + Φ₁Xₜ₋₁ + … + ΦₚXₜ₋ₚ + εₜ

the Johansen procedure works with the error correction form

ΔXₜ = μ + ΠXₜ₋₁ + Γ₁ΔXₜ₋₁ + … + Γₚ₋₁ΔXₜ₋ₚ₊₁ + εₜ

in which the rank of the matrix Π determines the number of cointegrating relationships.

In the maximum eigenvalue test, the null hypothesis of r cointegrating relationships is tested against the alternative of r + 1 cointegrating relationships, sequentially for r = 0, 1, 2, and so on. In the trace test, the null hypothesis of at most r cointegrating relationships is tested against the alternative of more than r, again sequentially. If the test statistic exceeds the critical value at the 5% level, the null hypothesis is rejected in favour of the alternative, and on this basis the number of cointegrating vectors is determined.
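As a hedged sketch of this procedure, statsmodels implements the Johansen test as coint_johansen, applied here to the log levels (the I(1) series) from the earlier sketch. The deterministic specification used in the paper is not stated, so det_order=1 (linear trend) is only an assumption, and k_ar_diff=1 corresponds to the single lag selected in Table 3.

```python
# Johansen cointegration test sketch (trace and maximum eigenvalue statistics).
from statsmodels.tsa.vector_ar.vecm import coint_johansen

jres = coint_johansen(log_prices[["btc", "eur", "gold", "oil"]], det_order=1, k_ar_diff=1)
print(jres.lr1)   # trace statistics for r = 0, 1, 2, 3
print(jres.cvt)   # trace critical values (90%, 95%, 99%)
print(jres.lr2)   # maximum-eigenvalue statistics
print(jres.cvm)   # maximum-eigenvalue critical values
```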

Table 2 shows the cointegration results. As shown, at the 5% significance level the trace and maximum eigenvalue statistics for r = 0 exceed the corresponding critical values, and accordingly one cointegrating vector is identified. A lag exclusion test has also been carried out to determine the lag, and its results are reported in Table 3.

Table 2. Johansen Cointegration Test

Unrestricted Cointegration Rank Test (Trace)

Hypothesized No. of CE(s)   Eigenvalue   Trace Statistic   0.05 Critical Value   Prob.**

None * 0.188973 60.52780 47.85613 0.0021

At most 1 0.122741 29.10974 29.79707 0.0599

At most 2 0.049167 9.466739 15.49471 0.3240

At most 3 0.012615 1.904224 3.841466 0.1676

Unrestricted Cointegration Rank Test (Maximum Eigenvalue)

Hypothesized No. of CE(s)   Eigenvalue   Max-Eigen Statistic   0.05 Critical Value   Prob.**

None * 0.188973 31.41806 27.58434 0.0153

At most 1 0.122741 19.64300 21.13162 0.0797

At most 2 0.049167 7.562515 14.26460 0.4249

At most 3 0.012615 1.904224 3.841466 0.1676

Table 3. Lag Exclusion Test For Determination Of Lag

Lag LogL LR FPE AIC SC HQ

1 1247.198 NA 6.24e-13* -16.75099* -16.42550* -16.61874*

2 1254.641 14.07750 7.01e-13 -16.63458 -15.98360 -16.37008

3 1265.039 19.09844 7.58e-13 -16.55836 -15.58189 -16.16161

4 1279.586 25.92716 7.74e-13 -16.53859 -15.23663 -16.00959

5 1291.871 21.22686 8.17e-13 -16.48804 -14.86060 -15.82679

6 1308.727 28.20802* 8.12e-13 -16.49969 -14.54675 -15.70619

7 1313.694 8.041423 9.50e-13 -16.34958 -14.07115 -15.42383

8 1325.564 18.57174 1.01e-12 -16.29338 -13.68947 -15.23539

* indicates lag order selected by the criterion

Determining the optimal lag of the model

Before estimation, it is necessary to specify the lag length entered into the model, to ensure that the error terms of the vector error correction model have the classical properties: no serial correlation, a normal distribution with zero mean and variance σ², and independence from one another. There are several criteria for determining the optimal lag. The results from the estimated model indicate that most criteria select the first lag, as in the sketch below.
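A minimal sketch of this lag selection step, reporting the same criteria as Table 3 (AIC, SC/BIC, HQ, and FPE); variable names follow the earlier sketches and are assumptions, not the paper's code.

```python
# Lag-order selection sketch for the underlying VAR.
from statsmodels.tsa.api import VAR

order = VAR(log_prices[["btc", "eur", "gold", "oil"]]).select_order(maxlags=8)
print(order.summary())        # AIC, BIC (SC), FPE and HQIC for each lag
print(order.selected_orders)  # lag chosen by each criterion
```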

Granger causality test and relationship of variables

To verify the relationships between the variables in the research model before regressing them, the Granger causality test is used in this section.

Granger defined the causality relationship based on two principles:

  1. The cause happens prior to its effect.
  2. The cause has unique information about the future values of its effect.

Given these two principles about causality, Granger proposed testing the following hypothesis for the identification of a causal effect of X on Y:

P[Y(t+1) ∈ A | I(t)] ≠ P[Y(t+1) ∈ A | I₋ₓ(t)]

where P refers to probability, A is an arbitrary non-empty set, and I(t) and I₋ₓ(t) respectively denote the information available as of time t in the entire universe and in the modified universe in which X is excluded. If the above hypothesis is accepted, we say that X Granger-causes Y.

A vector autoregressive model has been used to study the causal relationships among the variables. The null hypothesis is that the coefficients on the lags of the causing variable are jointly zero, and the test statistic used here is the Wald statistic, since we are dealing with a system of equations; this statistic has a χ² distribution. If the χ² value falls in the critical region, or the p-value is less than 5%, the H₀ hypothesis is rejected. Table 4 reports the results of the VECM Granger causality test.
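Before turning to the paper's results in Table 4, an illustrative analogue of this Wald test is sketched below: a VAR fitted to the differenced log series allows a chi-square block test of whether the lags of one variable can be excluded from another variable's equation. This sketch omits the error correction term that the paper's VECM-based test includes, so its numbers will not match Table 4 exactly; variable names are the hypothetical ones from the earlier sketches.

```python
# Wald (chi-square) causality test sketch on the differenced log series.
from statsmodels.tsa.api import VAR

var_res = VAR(log_returns[["btc", "eur", "gold", "oil"]]).fit(1)   # one lag, as in the text
wald = var_res.test_causality(caused="btc", causing=["oil"], kind="wald")
print(wald.summary())   # chi-square statistic, degrees of freedom and p-value
```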

Table 4. VECM Granger Causality Test

Dependent variable: D(LOG(PBTC))

Excluded Chi-sq df Prob.

D(LOG(PGOLD)) 0.014237 1 0.9050

D(LOG(PEUR)) 1.949913 1 0.1626

D(LOG(POIL)) 1.226703 1 0.0468

All 3.676134 3 0.2986

Dependent variable: D(LOG(PGOLD))

Excluded Chi-sq df Prob.

D(LOG(PBTC)) 0.138204 1 0.7101

D(LOG(PEUR)) 0.132976 1 0.7154

D(LOG(POIL)) 1.288882 1 0.0403

All 2.581629 3 0.4607

Dependent variable: D(LOG(PEUR))

Excluded Chi-sq df Prob.

D(LOG(PBTC)) 0.805988 1 0.3693

D(LOG(PGOLD)) 0.056448 1 0.8122

D(LOG(POIL)) 0.253634 1 0.6145

All 1.196848 3 0.7538

Dependent variable: D(LOG(POIL))

Excluded Chi-sq df Prob.

D(LOG(PBTC)) 0.004361 1 0.9473

D(LOG(PGOLD)) 0.674919 1 0.4113

D(LOG(PEUR)) 1.238831 1 0.2657

All 3.230491 3 0.3574

As the results in the table show, the test uses one lag, and the null hypothesis is the absence of an effect of each independent variable on the dependent variable; if the p-value is below 0.05, the null hypothesis is rejected and the independent variable is concluded to affect the dependent variable. With Bitcoin as the dependent variable, the p-value of the oil variable is less than 0.05, which indicates an effect of the oil price on the Bitcoin price.

In this study, to investigate whether oil, gold, and the Euro can have an effect on the Bitcoin currency, we used the Granger causality test and the modified Dickey-Fuller test. The impact of oil, the Euro, and gold was therefore evaluated from 2016 to 2019.

From the test results and the p-values of the variables, it can be concluded that Bitcoin is affected only by the oil variable and is not affected by any of the other variables, such as the Euro and gold.
