Portfolio risk management and algorithmic trading are active research areas that have received extensive attention: the former gained great importance after the 2008 financial crisis, and the latter because more and more trades are executed automatically by computer programs. The main purpose of this dissertation is to explore specific problems in financial engineering related to portfolio risk management and algorithmic trading, as well as some quantitative tools, such as the linear support vector machine, via optimization techniques, e.g., convex optimization and robust optimization. Recently, the new paradigm of the risk parity portfolio, which aims at distributing the risk among the assets in the portfolio, has been receiving significant attention, especially after the 2008 financial crisis. At the moment, most of the existing risk parity problem formulations are highly nonconvex and are solved via standard off-the-shelf numerical optimization methods, e.g., sequential quadratic programming and interior-point methods. However, for the nonconvex risk parity formulations, such standard numerical approaches may be highly inefficient and may not yield satisfactory solutions. To deal with these issues, we first propose a general problem formulation that covers most of the existing risk parity formulations, and then propose a family of simple and efficient successive convex optimization methods for this general formulation. The numerical results show that our proposed methods outperform the existing ones by orders of magnitude. Another observation on the risk parity portfolio is that it always results in a portfolio with nonzero weights in all the assets, since each asset tends to have the same risk contribution.
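As a minimal numerical illustration (a sketch, not the dissertation's algorithm), the equal-risk-contribution property can be checked directly: the risk contribution of asset i is RC_i = w_i (Σw)_i / σ(w), where σ(w) = √(wᵀΣw), and in a risk parity portfolio all RC_i coincide. The toy covariance matrix and weights below are illustrative assumptions.

```python
import numpy as np

def risk_contributions(w, Sigma):
    """Per-asset risk contributions RC_i = w_i * (Sigma w)_i / sigma(w),
    which sum to the total portfolio volatility sigma(w) = sqrt(w' Sigma w)."""
    sigma = np.sqrt(w @ Sigma @ w)
    return w * (Sigma @ w) / sigma

# Toy 2-asset example: with a diagonal covariance, weights proportional
# to inverse volatility equalize the risk contributions.
Sigma = np.diag([0.04, 0.01])   # volatilities 20% and 10%
w = np.array([1 / 0.2, 1 / 0.1])
w = w / w.sum()                 # normalize to a full-investment portfolio
rc = risk_contributions(w, Sigma)
print(rc)                       # both contributions are equal
```

With correlated assets the inverse-volatility shortcut no longer works, which is exactly why the general formulations discussed above become nonconvex and call for dedicated solvers.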
Investors, however, cannot spread their capital over all the assets listed on the markets, as that would incur an unrealistically high transaction cost and thus significantly reduce the return of the designed portfolio. A more practical approach is to select only some of the assets and distribute the capital among the selected assets while keeping the risk sufficiently diversified. Motivated by this idea, we consider portfolio selection and risk diversification jointly by adding sparsity and risk parity regularizations to the portfolio problem formulations, and then propose an efficient sequential algorithm based on successive convex optimization techniques. Once a portfolio has been designed, it must be built by executing the corresponding orders. Order execution for algorithmic trading has been studied in the literature to determine the optimal strategy by minimizing a trade-off between expected execution cost and risk. Usually, the variance of the execution cost is taken as a proxy for risk due to its mathematical tractability. However, it is well known that variance is not an appropriate risk measure when dealing with financial returns drawn from non-normal, negatively skewed, and leptokurtic distributions. Here we propose using the conditional value-at-risk (CVaR) of the execution cost as the risk measure, which takes into consideration only the unfavorable part of the return distribution or, equivalently, the unwanted high-cost events. In addition, due to parameter estimation errors in the price model, the naive strategies obtained from the nominal problem may perform badly in real markets, so it is extremely important to take such estimation errors into consideration. To deal with this, we extend both the traditional mean-variance approach and our proposed CVaR approach to their robust design counterparts.
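The tail-focused nature of CVaR can be seen from its empirical definition: the CVaR at level α is the average of the worst (1 − α) fraction of cost samples, i.e., the mean of the costs beyond the α-quantile (the VaR). The sketch below uses a standard-normal cost sample purely as an illustrative assumption, not as the dissertation's execution-cost model.

```python
import numpy as np

def cvar(costs, alpha=0.95):
    """Empirical CVaR: the mean of the cost samples at or beyond the
    alpha-quantile (the VaR). Only the unfavorable tail is averaged."""
    costs = np.asarray(costs)
    var = np.quantile(costs, alpha)    # empirical Value-at-Risk
    return costs[costs >= var].mean()  # average of the high-cost tail

rng = np.random.default_rng(0)
costs = rng.standard_normal(100_000)   # toy cost distribution
c95 = cvar(costs, 0.95)
print(c95)  # close to the exact normal CVaR phi(z_0.95)/0.05 ~ 2.06
```

Unlike the variance, this quantity is unchanged by making the favorable tail heavier, which is why it better matches a trader's asymmetric view of execution risk.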
Finally, another efficient quantitative tool that is powerful for various prediction and classification problems, including problems in financial engineering, is the support vector machine (SVM). In practice, to detect the relevant features, to prevent features with a larger dynamic range from dominating those with a smaller one, and to avoid numerical difficulties during the computation, it is necessary to apply transformations such as feature standardization and feature scaling. In general, however, such transformations are performed independently of the SVM, and it is not clear how to do them optimally in a joint way. To overcome this drawback, we start from the standard linear SVM formulation and extend it to a more general linear SVM that jointly incorporates into the problem formulation a general transformation (including feature scaling, feature normalization, feature rotation prior to scaling, etc.) together with training data information. This results in a unifying convex framework that admits many different variations with very diverse numerical performance. The obtained unified framework captures the existing works as special cases, provides more insight into different SVMs from the “energy” and “penalty” points of view, and enables us to propose new SVMs that outperform the existing ones in some scenarios.
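The conventional decoupled pipeline that the joint formulation improves upon can be sketched as follows: standardize the features first, then train a plain hinge-loss linear SVM. The toy data, subgradient-descent training loop, and step sizes below are illustrative assumptions, not the generalized formulation proposed in the dissertation.

```python
import numpy as np

def standardize(X):
    """Per-feature standardization, done separately from training, so that
    no feature dominates by sheer dynamic range."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Plain hinge-loss linear SVM fit by (sub)gradient descent on
    lam/2 ||w||^2 + mean(max(0, 1 - y_i (w'x_i + b)))."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                              # margin violators
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two separable clusters whose features live on very different scales
# (order 1 vs. order 1000) -- the case where scaling matters most.
rng = np.random.default_rng(1)
X = np.r_[rng.normal([1, 1000], [0.5, 200], (50, 2)),
          rng.normal([-1, -1000], [0.5, 200], (50, 2))]
y = np.r_[np.ones(50), -np.ones(50)]
Xs = standardize(X)
w, b = train_linear_svm(Xs, y)
acc = ((Xs @ w + b) * y > 0).mean()
print(acc)
```

Here the scaling is fixed before training; the joint framework described above instead treats the transformation as part of the optimization itself.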
| Date of Award | 2015 |
|---|---|
| Original language | English |
| Awarding Institution | The Hong Kong University of Science and Technology |