# Multivariate robust estimation of DCC-GARCH volatility model

TABLE OF CONTENTS

List of Tables
List of Figures
Chapter 1  Introduction
  1.1 ARCH/GARCH Volatility Modeling
  1.2 Outliers and Robust Estimation
  1.3 Summary
Chapter 2  Literature Review
  2.1 ARCH/GARCH Models
    2.1.1 Extensions to Univariate ARCH/GARCH Models
    2.1.2 Multivariate ARCH/GARCH Models
    2.1.3 Conditional Correlation Approach
  2.2 Generalized Robust Estimation
    2.2.1 Break-down Point Approach
    2.2.2 Deviance Robust Estimation
    2.2.3 Robust Estimation of Time Series Data
    2.2.4 Multivariate Robust Estimation
  2.3 Robust Estimation of ARCH/GARCH
  2.4 Symmetry of Elliptical Distributions
Chapter 3  Robust Estimation of the DCC-GARCH Model
  3.1 Outliers in the General DCC-GARCH
  3.2 Robust Estimation of DCC-GARCH
    3.2.1 Proposed Robust Method
    3.2.2 Consistency
  3.3 Initial Tests of Robust Method
Chapter 4  Data Driven Evaluation
  4.1 Multivariate Characterization of Data
  4.2 Data Driven Exploration
    4.2.1 Data Driven Simulation and Results
    4.2.2 Application to Foreign Exchange Rates
Chapter 5  Conclusion
  5.1 Summary
  5.2 Conclusion
  5.3 Suggestions for Future Research
References
Appendices
  Appendix A  DCC-GARCH Estimation and Results
    A.1 DCC-GARCH Likelihood Function
    A.2 Complete Initial DCC-GARCH Simulations
  Appendix B  Data Driven Data Generating Process
    B.1 Densities of Squared Radii
    B.2 Volatility Estimation of Foreign Exchange Rates
    B.3 Correlation Estimation of Foreign Exchange Rates

LIST OF TABLES

Table 3.1  R_0 Results with First Initial Correlation Matrix
Table 3.2  Results with First Initial Correlation Matrix
Table 3.3  R_0 Results with Second Initial Correlation Matrix
Table 3.4  Results with Second Initial Correlation Matrix
Table 3.5  Comparing Two Robust Methods with First Correlation Structure
Table 3.6  Comparing Two Robust Methods with Second Correlation Structure
Table 3.7  Results Across Generating Processes with First Set
Table 3.8  Results Across Generating Processes with Second Set
Table 4.1  Exchange Rate Volatility Parameter Estimation
Table 4.2  Comparing R² Values of Q-Q Plots
Table 4.3  R² Combinations for Contaminated Normal
Table 4.4  Set 1 Results for Data Driven Comparison of Methods
Table 4.5  Set 2 Results for Data Driven Comparison of Methods
Table 4.6  Volatility Parameters Between Robust and Traditional Method
Table 4.7  Correlation Parameters Between Robust and Traditional Method
Table A.1  Complete MSE Results Across Generating Processes with First Portfolio
Table A.2  Complete Bias Results Across Generating Processes with First Portfolio
Table A.3  Complete Variance Results Across Generating Processes with First Portfolio
Table A.4  Complete Results Across Generating Processes with Second Portfolio
Table A.5  Complete Bias Results Across Generating Processes with Second Portfolio
Table A.6  Complete Variance Results Across Generating Processes with Second Portfolio

LIST OF FIGURES

Figure 1.1  Likelihoods of α (a) and β (b) without Split Likelihood Estimation
Figure 1.2  Likelihoods of α (a) and β (b) with Split Likelihood Estimation
Figure 4.1  Q-Q Plot of Spherical Symmetry
Figure 4.2  Q-Q Plots of Squared Radii with χ²(1.36) (a) and F(2.75, 3.48) (b)
Figure 4.3  Overlaid Estimated Volatility Time Plots
Figure 4.4  Estimated Volatility Time Plots with [a] MLE and [b] Robust Estimation
Figure 4.5  Overlaid Estimated Volatility Time Plots
Figure 4.6  Time Plots of Determinant of Estimated Correlation Matrices
Figure 4.7  Time Plots of Average Correlation Between Assets
Figure B.1  Density Plots of Radii (a) and Squared Radii (b)
Figure B.2  Estimated Volatility Time Plots of Euro (MLE [a], Robust [b])
Figure B.3  Estimated Volatility Time Plots of Pound (MLE [a], Robust [b])
Figure B.4  Estimated Volatility Time Plots of Franc (MLE [a], Robust [b])
Figure B.5  Estimated Volatility Time Plots of Yen (MLE [a], Robust [b])
Figure B.6  Estimated Volatility Time Plots of Dollar (MLE [a], Robust [b])
Figure B.7  Estimated Correlation Time Plots
Figure B.8  Estimated Correlation Time Plots
Figure B.9  Estimated Correlation Time Plots
Figure B.10  Estimated Correlation Time Plots
Figure B.11  Estimated Correlation Time Plots

CHAPTER 1

Introduction

Volatility modeling plays a critical role in mathematical finance and statistical applications. The ability to estimate and forecast volatilities for different assets and groups of assets leads to a better understanding of current and future financial risk. Many different methods of volatility estimation have been developed over the past few decades. Understanding the characteristics of financial assets helps develop estimation methods for volatilities. Studying the volatilities of financial assets reveals that volatility seems to vary over time instead of remaining constant. Volatilities also exhibit persistence, or dependence over time, with a clustering effect: small (or large) returns tend to be followed by small (or large) returns of either sign.

Let r_t be the daily return of a financial asset, modeled by

    r_t = √(h_t) ε_t,

with ε_t the random error with variance 1. These daily returns are defined as the logarithm of the relative prices, log(P_t / P_{t-1}), with P_t the current-period price in dollars. It is reasonable to assume daily returns have a conditional mean of approximately zero, since even an extremely high annual return of 25% translates to a daily return of only 0.09%. Time-weighted estimates are a reasonable initial approach to volatility estimation that accounts for persistence:

    h_t = Σ_{i=1}^{n} ω_i r²_{t-i},

with Σ_{i=1}^{n} ω_i = 1 and the more recent observations more heavily weighted. A problem with this approach is that many different weights ω_i need to be defined. An exponential weighting scheme, where ω_i = λ ω_{i-1} with λ between 0 and 1, potentially solves this problem. Other estimates have also been proposed.
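As a concrete illustration, the exponential weighting scheme above collapses to a one-line recursion. The sketch below is a minimal example, not code from this thesis; the smoothing constant λ = 0.94 is a conventional choice and the returns are simulated.

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """Exponentially weighted volatility: h_t = lam * h_{t-1} + (1 - lam) * r_{t-1}^2,
    the recursion implied by weights omega_i = lam * omega_{i-1}."""
    h = np.empty(len(returns))
    h[0] = np.var(returns)            # initialize at the sample variance
    for t in range(1, len(returns)):
        h[t] = lam * h[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return h

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=500)   # simulated daily returns with mean ~ 0
h = ewma_volatility(r)
print(h[-1] > 0)                      # volatility estimates stay positive
```

Because the weights decay geometrically, only the previous estimate and the most recent squared return need to be stored, which is why this scheme avoids specifying each ω_i individually.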

1.1 ARCH/GARCH Volatility Modeling

Instead of a weighting scheme, Engle (1982) uses an autoregressive time series approach to account for persistence in volatility estimation. He also assumes the conditional variance, or volatility, of returns varies over time. Engle's autoregressive conditional heteroscedasticity (ARCH) model defines the volatility as

    h_t = a_0 + a_1 r²_{t-1},    (1.1)

with a_0 > 0 and a_1 ≥ 0 so that the volatility remains positive. Engle assumes the returns r_t, given ψ_{t-1}, follow a normal distribution, r_t | ψ_{t-1} ~ N(0, h_t), where ψ_{t-1} represents all information up to time t-1. This model easily extends to p lags of returns with the ARCH(p) model

    h_t = a_0 + Σ_{i=1}^{p} a_i r²_{t-i},    (1.2)

with a_0 > 0 and a_i ≥ 0 for all i. However, in practice, we often need large values of p to accurately model real-world data.

Bollerslev (1986) avoids the problem of large values of p in Engle's ARCH model by generalizing the ARCH(p) model into the generalized autoregressive conditional heteroscedasticity (GARCH) model, in much the same way as an autoregressive (AR) model extends to the autoregressive moving average (ARMA) model. The GARCH model allows a longer-memory process with more flexibility. The GARCH(p,q) model still assumes normality, r_t | ψ_{t-1} ~ N(0, h_t), but instead of equation 1.2, h_t is defined as

    h_t = a_0 + Σ_{i=1}^{p} a_i r²_{t-i} + Σ_{i=1}^{q} b_i h_{t-i},    (1.3)

with a_0 > 0, a_i ≥ 0, and b_i ≥ 0. In many cases, p = q = 1 is found to give an adequate fit. The univariate ARCH/GARCH framework of models has been adapted into many different forms, which are detailed in Section 2.1.1.

The ARCH/GARCH framework of models also extends into a multivariate context, to model the underlying volatilities and correlations between different market assets. The general multivariate extension of the GARCH model treats a vector of assets as a k × 1 stochastic process r_t defined as

    r_t = H_t^{1/2} ε_t,    (1.4)

where H_t^{1/2} is a factor of the k × k conditional variance-covariance matrix, and Var(ε_t) = I_k. Bollerslev et al. (1988) model H_t as

    vech(H_t) = c + A vech(ε_{t-1} ε'_{t-1}) + B vech(H_{t-1}),    (1.5)

where vech(·) is the operator that performs a column-wise vectorization of the lower triangular portion of a matrix, and A and B are parameter matrices. This specification of the H_t matrix is referred to as the VEC model. The number of parameters in this model grows very quickly as the number of assets grows. To make parameter estimation feasible, Bollerslev, Engle, and Wooldridge proposed restricting A and B to diagonal matrices. Models with other specifications of H_t, such as the BEKK(1,1,K) and Factor GARCH, are described in Section 2.1.2. Most of these approaches involve many parameters to be estimated, which leads to computational burdens for large portfolios of assets.

A less computationally burdensome approach to multivariate GARCH estimation combines univariate estimation of GARCH models with estimation of multivariate correlation matrices. This greatly reduces the number of parameters by separately defining individual conditional variance structures and an overall correlation structure. Bollerslev (1990) designed one of these approaches with the constant conditional correlation GARCH (CCC-GARCH). The CCC-GARCH defines the conditional covariance matrix of returns as

    H_t = D_t R D_t,    D_t = diag(√h_{i,t}),    (1.6)

where R is a correlation matrix containing the conditional correlations, and h_{i,t} follows the univariate GARCH model

    h_{i,t} = a_{i,0} + Σ_{p=1}^{P_i} a_{i,p} r²_{i,t-p} + Σ_{q=1}^{Q_i} b_{i,q} h_{i,t-q}.    (1.7)

The conventional sample correlation matrix is a reasonable estimate of R. In practice, however, the assumption that correlations of assets remain constant over time seems unrealistic. In particular, the constant correlation assumption understates risk if correlations increase in turbulent markets.

Engle (2002) relaxes the assumption of constant correlation in the dynamic conditional correlation GARCH (DCC-GARCH) by allowing the correlation matrix to change over time. This model is widely used for its combination of computational ease and an evolving correlation matrix, defined by

    H_t = D_t R_t D_t.    (1.8)

Engle mentions two different estimates for the R_t matrix. The first specification involves exponential smoothing with

    Q_t = (1 - λ) ε_{t-1} ε'_{t-1} + λ Q_{t-1},    (1.9)

where Q_t is a positive definite covariance matrix and R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}. Another

method uses a GARCH(1,1)-type specification with

    Q_t = R_0 (1 - α - β) + α (ε_{t-1} ε'_{t-1}) + β Q_{t-1},    (1.10)

with R_0 the unconditional correlation matrix and α + β < 1.

This leads to the following specification of the DCC-GARCH model:

    r_t | ψ_{t-1} ~ N(0, D_t R_t D_t),
    D²_t = diag(a_{0,i}) + diag(a_{1,i}) ∘ r_{t-1} r'_{t-1} + diag(b_{1,i}) ∘ D²_{t-1},
    ε_t = D_t^{-1} r_t,
    Q_t = R_0 (1 - α - β) + α (ε_{t-1} ε'_{t-1}) + β Q_{t-1},
    R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2},    (1.11)

where ∘ represents the elementwise product of matrices. The log likelihood we would maximize to estimate the parameters of the model is

    L = -(1/2) Σ_{t=1}^{T} [ n log(2π) + 2 log|D_t| + r'_t D_t^{-1} D_t^{-1} r_t - ε'_t ε_t + log|R_t| + ε'_t R_t^{-1} ε_t ],    (1.12)

as shown in detail in Appendix A. Maximizing this function over the parameters gives the maximum likelihood estimates (MLE) of the parameters. Engle suggests splitting the likelihood into a sum of two parts to improve efficiency in fitting the model. The two components are the volatility component, which depends only on the individual GARCH parameters, and the correlation component, which depends on both the correlation parameters and the individual GARCH parameters. Let θ denote the volatility parameters in the D matrix and φ denote the correlation parameters in the R matrix. The split is written

    L(θ, φ) = L_V(θ) + L_C(θ, φ),    (1.13)

with the volatility part

    L_V(θ) = -(1/2) Σ_{t=1}^{T} [ n log(2π) + 2 log|D_t| + r'_t D_t^{-1} D_t^{-1} r_t ],    (1.14)

and the correlation part

    L_C(θ, φ) = -(1/2) Σ_{t=1}^{T} [ log|R_t| + ε'_t R_t^{-1} ε_t - ε'_t ε_t ].    (1.15)

Engle first estimates the volatility parameters by ML estimation. He then places these estimates into the correlation portion of the likelihood and estimates the correlation parameters by ML estimation.
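The second stage of this two-step procedure can be sketched directly from equations 1.10, 1.11, and 1.15: given the standardized residuals ε_t from the first-stage GARCH fits, the correlation log likelihood L_C is a simple recursion in Q_t. The code below is an illustrative sketch, not the thesis's implementation; the simulated residuals and the 0.85 equicorrelation matrix are stand-in inputs.

```python
import numpy as np

def dcc_correlation_loglik(eps, R0, alpha, beta):
    """Correlation part L_C of the split DCC likelihood (equation 1.15).
    eps: (T x k) standardized residuals from the first-stage GARCH fits."""
    T, k = eps.shape
    Q = R0.copy()                            # start the recursion at R_0
    ll = 0.0
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R = Q * np.outer(d, d)               # R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}
        e = eps[t]
        ll -= 0.5 * (np.log(np.linalg.det(R)) + e @ np.linalg.solve(R, e) - e @ e)
        # Q_{t+1} = R_0 (1 - alpha - beta) + alpha * e_t e_t' + beta * Q_t
        Q = R0 * (1 - alpha - beta) + alpha * np.outer(e, e) + beta * Q
    return ll

rng = np.random.default_rng(1)
eps = rng.standard_normal((200, 3))          # stand-in first-stage residuals
R0 = np.full((3, 3), 0.85)
np.fill_diagonal(R0, 1.0)
ll = dcc_correlation_loglik(eps, R0, alpha=0.24, beta=0.7)
print(np.isfinite(ll))
```

Maximizing this function over (α, β), with the volatility parameters held at their first-stage estimates, is exactly Engle's second step.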

[Figure 1.1: Likelihoods of α (a) and β (b) without Split Likelihood Estimation]

After comparing estimates from whole ML estimation and ML estimation across parts, the lack of efficiency mentioned above arises from problems in the estimation of the correlation parameters. A set of three assets is evaluated with a DCC-GARCH model with correlation parameters α = 0.24 and β = 0.7 over 500 periods in time, with an initial correlation matrix R_0 defined as

    R_0 =  [ 1     0.85  0.85
             0.85  1     0.85
             0.85  0.85  1    ].

The likelihood functions of both the α and β parameters show instability, as seen in Figure 1.1. These graphs show the value of the likelihood function for changing values of a single parameter while the other parameters in the model are held constant.

The instability in the estimation of these parameters poses a problem in trying to draw conclusions about the model. The likelihoods for the correlation parameters show more stability when the likelihood is broken into two pieces, as seen in Figure 1.2. The immense improvement in the stability of the estimation allows for better estimation of the model parameters. With the split estimation technique, we can better examine the effects of outliers on the maximum likelihood estimation of the DCC-GARCH.

1.2 Outliers and Robust Estimation

Observations that deviate from the general pattern of the data, called outliers, affect the accuracy of standard techniques of analysis and estimation. Outliers reduce the ability of classic techniques

such as the sample mean, sample variance, sample correlation, and regression modeling to estimate parameters of the data. Many different robust estimation techniques have been developed to estimate models from data whether or not outliers are present. Good robust estimators provide accurate estimates of the parameters in both the presence and absence of outliers.

[Figure 1.2: Likelihoods of α (a) and β (b) with Split Likelihood Estimation]

Outliers also affect maximum likelihood estimation (MLE), a common method of parameter estimation used for the DCC-GARCH model. Define L(β | y_1, ..., y_n) as the log of the likelihood function for the random variables y_i. We derive the MLE either by solving

    β̂_MLE = argmax_β L(β | y_1, ..., y_n),

or by taking the derivative of the log likelihood and solving

    Σ_{i=1}^{n} ℓ(β | y_i) = 0,    (1.16)

where ℓ(·) is the derivative of the log likelihood function. Under certain regularity conditions, β̂_MLE is both consistent and asymptotically normally distributed. Maximum likelihood estimators are a specific subset of a general class of estimators called M-estimators, developed by Huber (1964). These estimates, β̂_ψ, are solutions to

    Σ_{i=1}^{n} ψ(y_i, β) = 0    (1.17)

instead of equation 1.16. Assume the function is unbiased, defined as E_β[ψ(y_i, β)] = 0. Then, under

certain regularity conditions defined by Huber (1973), β̂ is consistent and asymptotically normal. Different choices of the ψ function lead to robust estimates of the parameters. These choices are discussed further in Section 2.2.

It is beneficial to understand outliers in time-dependent data because the DCC-GARCH model uses an autoregressive structure in estimating the volatilities and correlations. Time series data, such as financial data, have greater potential for outliers to hinder the estimation process because of the underlying dependence between observations. In time series data, outliers may occur in patches or in isolation, and in some cases entire shifts of the process may occur. Two common types of outliers in time series data are additive outliers (AO) and innovation outliers (IO).

Maronna et al. (2006) describe additive outliers as cases where, instead of the expected observation y_t, we observe y_t + υ_t, where υ_t ~ (1 - ε) δ_0 + ε N(μ_υ, σ²_υ). In this definition, δ_0 is a point-mass distribution at zero and σ²_υ is significantly greater than the variance of y_t. This creates an outlier with probability ε, and n consecutive outliers with probability ε^n. They define innovation outliers as outliers that affect both the current and subsequent observations. These outliers are especially relevant in autoregressive (AR) and autoregressive moving average (ARMA) models. Innovation outliers occur in the error term of the model. In this case, the observation y_t itself is affected, as shown with a simple AR(1) model:

    y_t = φ y_{t-1} + ε_t.
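The additive-outlier mechanism just described is easy to simulate: contaminate a clean AR(1) path with υ_t drawn from the stated point-mass/normal mixture. The sketch below is illustrative only; the values ε = 0.05, φ = 0.7, and σ_υ = 10 are assumptions for the example, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
T, phi = 500, 0.7
y = np.zeros(T)
for t in range(1, T):                      # clean AR(1): y_t = phi * y_{t-1} + e_t
    y[t] = phi * y[t - 1] + rng.standard_normal()

# additive outliers: observe y_t + v_t, with v_t ~ (1 - eps) * delta_0 + eps * N(0, sigma_v^2)
eps_prob, sigma_v = 0.05, 10.0             # sigma_v much larger than sd(y), as the definition requires
contaminated = rng.random(T) < eps_prob    # each point is an outlier with probability eps
v = np.where(contaminated, rng.normal(0.0, sigma_v, T), 0.0)
y_obs = y + v

# contamination inflates the observed variability without touching the underlying process
print(int(contaminated.sum()), float(np.std(y_obs)) > float(np.std(y)))
```

Note that the underlying series y is untouched; only the observations are corrupted, which is precisely what distinguishes an AO from the innovation outlier discussed next.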
If ε_t comes from either a distribution with heavier tails than the normal or a mixture of two normal distributions, then y_t becomes an outlier that directly affects y_{t+1} through

    y_{t+1} = φ ÿ_t + ε_{t+1},

where ÿ_t is the value of y_t when ε_t is not drawn from a normal distribution. Both additive and innovation outliers potentially bias the estimates and change their variability.

Many different methods have been proposed to handle outliers in time series data, such as robust filter estimation for AR and ARMA models and M-estimation techniques, both of which are described in Maronna et al. (2006). These are just some of the possible approaches; more are discussed in detail in Section 2.2.3. A robust filter approach replaces the prediction residuals ε̂_t with robust prediction residuals ε̃_t by replacing outliers with robust filtered values. Instead of the typical residual definition in an AR(p) model,

    ε̂_t = (y_t - μ) - φ_1 (y_{t-1} - μ) - ... - φ_p (y_{t-p} - μ),

the new robust residuals are defined by

    ε̃_t = (y_t - μ) - φ_1 (ỹ_{t-1|t-1} - μ) - ... - φ_p (ỹ_{t-p|t-1} - μ),

where the ỹ_i are filtered predictions of the observations, which approximate their expected values. The value ỹ_t equals y_t if the observation does not fall outside a predetermined range around its expected value at that point. If ỹ_t falls outside the range, then the estimate is set to an approximation of the expected value at that point, given previous information. This robust filtering approach extends to the ARMA class of models as well. Although this estimation works well for AO, it does not work as well in the presence of IO.

M-estimation techniques for ARMA models minimize

    Σ_{t=p+1}^{T} ρ( ε̂_t(β) / σ̂_ε ),

where the ψ appearing in equation 1.17 may be the derivative of the ρ function appearing here, ε̂_t are the model residuals, and σ̂_ε is a robust estimate of the standard deviation of the residuals. As mentioned before, M-estimation may be implemented using various ψ functions, such as the MM-estimate of Yohai (1987), to help limit the effects of outliers. Yohai uses three steps for MM-estimation: he first computes an initial estimate β̂; from this estimate, he computes a robust scale estimate σ̂ for the residuals; he then iterates the previous two steps until convergence. These estimates are relatively robust to contamination with AO, but lose their effectiveness as the order p of the AR(p) process increases. However, asymptotic theory for M-estimates is based on the assumption that the errors in the model are homoscedastic, while the DCC-GARCH model is heteroscedastic by construction. Although some M-estimates do not depend on homoscedastic errors, they have lower efficiency than those that account for the lack of homoscedasticity.

An improved M-estimator without homoscedasticity takes into account the covariates and parameters making the errors heteroscedastic through

    y_t = β' x_t + g(ξ, β' x_t) ε_t,

where ξ is a parameter vector confined to the error variance. To obtain robust estimates of both sets of parameters, Maronna et al. (2006) suggest computing an initial estimate of the parameter vector β with the MM-estimate proposed above. The residuals of this model are then calculated and used to estimate the parameter vector ξ. From there, robust transformations of the original y_t and x_t are calculated by dividing through by the estimated g(·) function, to produce a more accurate estimate of β. The process continues in iterations until reasonable estimates are obtained.

The heteroscedastic errors affect not only univariate estimation, as in the case of the GARCH

parameters in the DCC-GARCH, but also the multivariate estimation of the correlation structure between variables. The DCC-GARCH model requires the estimation of a covariance matrix to describe the relationship between the multiple assets in the portfolio. White (1980) noted that heteroscedasticity hinders not only linear model parameter estimation, but also covariance matrix estimation. He proposes an estimate of the covariance matrix that is not unduly affected by the presence of heteroscedasticity and does not require a specific model of the heteroscedasticity. He assumes the errors in the model have heteroscedasticity of the form E(ε²_t | x_t) = g(x_t). Under some basic moment assumptions on the errors, White develops the estimator

    V̂_n = (1/n) Σ_{t=1}^{n} ε̂²_{t,MLE} x'_t x_t,

where ε̂_{t,MLE} are the residuals evaluated with the parameters at their MLE values. Using this estimator, the heteroscedasticity-robust covariance matrix is

    Σ̂_R = ( Σ_t x'_t x_t / n )^{-1} V̂_n ( Σ_t x'_t x_t / n )^{-1}.    (1.18)

Outliers also affect covariance matrix estimation. Some proposed robust multivariate estimates of the covariance matrix are computationally burdensome in high-dimensional data, such as some financial data. Robust estimation of location and scale using Mahalanobis distances computed from M-estimators is computationally difficult, according to Peña and Prieto (2001). They state that the minimum covariance determinant (MCD) of Rousseeuw (1984) is also computationally intensive. The MCD method selects the h observations, out of the total, whose covariance matrix has the lowest determinant; the MCD estimate of the covariance matrix is then a multiple of these points' covariance matrix. For this process to work, many iterations of resampling must take place, which led Rousseeuw and Van Driessen (1999) to create the FAST-MCD algorithm, explained in full detail in Hubert et al. (2008).

Peña and Prieto suggest that even the FAST-MCD algorithm requires too much resampling, and that it reduces its heavy computation time only through approximations. They observe that outliers in multivariate data created by a symmetric contamination tend to increase the kurtosis coefficient, so projecting the observations onto directions chosen by the kurtosis coefficient gives a better idea of which directions contain outliers. They create an algorithm based on these projected kurtosis directions; its details are contained in Peña and Prieto (2001). Other multivariate estimators with outliers are discussed further in Section 2.2.4.
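White's sandwich estimator in equation 1.18 is straightforward to compute once the residuals are in hand. The sketch below applies it to a toy regression whose error variance grows with the covariate; the design, coefficients, and error scale are illustrative assumptions, not taken from this thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = np.column_stack([np.ones(n), rng.uniform(1.0, 3.0, n)])  # rows are the x_t
beta = np.array([1.0, 2.0])
errors = rng.standard_normal(n) * x[:, 1]   # E(eps_t^2 | x_t) grows with the covariate
y = x @ beta + errors

beta_hat = np.linalg.solve(x.T @ x, x.T @ y)        # OLS, i.e. the Gaussian MLE here
resid = y - x @ beta_hat

S = x.T @ x / n                                     # (1/n) sum_t x_t' x_t
V = (x * resid[:, None] ** 2).T @ x / n             # V_n = (1/n) sum_t resid_t^2 x_t' x_t
Sigma_R = np.linalg.inv(S) @ V @ np.linalg.inv(S)   # equation 1.18
se = np.sqrt(np.diag(Sigma_R) / n)                  # divide by n for standard errors of beta_hat
print(se)
```

Since Σ̂_R estimates the asymptotic variance of √n(β̂ - β), dividing its diagonal by n before taking square roots yields the usual heteroscedasticity-robust standard errors.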

1.3 Summary

The above conclusions about the effects of outliers on autoregressive models, models with heteroscedasticity, and covariance matrix estimation show that the DCC-GARCH model is inherently hindered by outliers. This thesis proposes a robust estimation method for the DCC-GARCH that accounts for outliers present in the data. The second chapter is a literature review of ARCH/GARCH modeling in both the univariate and multivariate contexts; robust estimation for univariate, multivariate, and time series data; previous attempts at robust ARCH/GARCH estimation; and tests of symmetry for elliptical distributions. Chapter 3 proposes the robust estimation method for the DCC-GARCH, shows an example of outliers hindering the DCC-GARCH model, and discusses the construction and asymptotics of the new robust estimation method. Chapter 4 discusses attempts to identify real-world data distributions for a data-driven evaluation of the newly proposed model; it also summarizes the results of simulation studies comparing the maximum likelihood fitting of the DCC-GARCH model with the newly proposed robust method, and displays results of fitting the robust method to foreign exchange rate data. Chapter 5 concludes with a summary of the results along with possible areas of future research in the field.