Global Quantitative & Derivatives Strategy
07 January 2020
J.P. Morgan

Big Data and AI Strategies
Applications of Machine Learning in Equity Derivatives

Global Quantitative and Derivatives Strategy
Peng Cheng, CFA AC (1-212) 622-5036 peng.cheng@
Thomas J Murphy, PhD AC (1-212) 270-7377 thomas.x.murphy@
Marko Kolanovic, PhD (1-212) 622-3677 marko.kolanovic@
J.P. Morgan Securities LLC

In this report, we illustrate the applications of machine learning techniques to equity derivatives trading strategies. Specifically, we focus on the topics below:

• A Practitioner's Introduction to Neural Networks: We aim to demystify neural networks for our readers in a practitioner-friendly way. The neural network architecture is explained by comparing it to the familiar linear regression model. We then move on to real-world data and examine the correspondence between neural networks and existing, well-known financial models for volatility forecasting. Finally, an LSTM is used to forecast the volatility of the S&P 500 and EURUSD, and its performance is compared against GARCH.

• Sentiment Signals for Macro Dividend Trading: We look at the relationship between sentiment information contained in management call transcripts and subsequent dividend futures returns. Our analysis shows that, after controlling for SX5E returns, the sentiment data carries signals on dividend futures returns that are orthogonal to the index. Moreover, we develop a trading strategy incorporating the sentiment signal, which is shown to improve on a long-only dividend futures strategy.

• Sentiment Factor Returns: We analyze sentiment data from the vendor Alexandria Technology between 2000 and 2019 and find that it contains a significant short-term signal. We use a factor model that controls for traditional factors in order to produce a pure sentiment factor, and we examine its risk-adjusted performance, signal decay, and correlation with other factors. We also evaluate methods for combining the sentiment factor with traditional style factors, using value as an example.

• Cross Asset Volatility Optimal Portfolio Construction - Beyond Risk Parity: There is increasing evidence of non-normal return distributions in cross-asset risk premium strategies. Popular portfolio models such as mean-variance optimization and risk parity are ill-equipped to address the issue. We propose a framework for constructing cross-asset portfolios that addresses this challenge and achieves superior performance by incorporating the higher moments (skewness, kurtosis, etc.) and controlling for excess turnover. The framework is first demonstrated on simple stylized cases, followed by a more comprehensive real-world cross-asset example.

Please refer to our previous volumes for additional research on similar topics: May 2018, Nov 2018, and Jun 2019.

See page 60 for analyst certification and important disclosures. J.P. Morgan does and seeks to do business with companies covered in its research reports. As a result, investors should be aware that the firm may have a conflict of interest that could affect the objectivity of this report. Investors should consider this report as only a single factor in making their investment decision.

Figure 8: Define sigmoid function

1 sigmoid <- function(x){
2   exp(x)/(1 + exp(x))
3 }

Source: J.P. Morgan

We now move on to the forward propagation step. The cbind(1, ...) calls in lines 6 and 8 add the intercepts (biases). The sigmoid function is applied to the hidden node in line 7.

Figure 9: Define forward propagation function

5 fwdprop <- function(x, wh, wy){
6   h <- cbind(1, x) %*% wh
7   h <- sigmoid(h)                  ### hidden layer
8   output <- cbind(1, h) %*% wy     ### output layer
9   return(list(output = output, h = h))
10 }

Source: J.P. Morgan
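As a quick illustration (ours, not the report's), the forward pass can be run on toy inputs to confirm the dimensions; the weight values below are arbitrary, and the sketch assumes sigmoid and fwdprop as defined in Figures 8 and 9:

x_toy <- matrix(rnorm(10 * 3), ncol = 3)   # 10 observations of 3 inputs
wh    <- rnorm(4)                          # bias + 3 input weights for the hidden node
wy    <- rnorm(2)                          # bias + hidden-node weight for the output
out   <- fwdprop(x_toy, wh, wy)
dim(out$output)                            # 10 x 1 matrix of fitted values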
Similar to OLS, our loss function is defined to be least squares. The init.w variable is a vector containing all the parameters, including the intercepts (biases). For now, for the sake of simplicity, we hard-code the first four values to the wh terms and the last two to the wy terms.

Figure 10: Define loss function

12 loss.fun <- function(init.w, x, y){
13   wh = init.w[1:4]
14   wy = init.w[5:6]
15   y.hat <- fwdprop(x, wh, wy)$output
16   return(sum((y - y.hat)^2))
17 }

Source: J.P. Morgan

The code above constitutes a one-layer, one-neuron neural network model. It can be expanded relatively easily to accommodate multiple layers and neurons; a sketch of a two-neuron extension is given at the end of this section.

Before moving on to the backpropagation step, we first simulate some sample data in order to train the neural network. As opposed to using actual data, simulation lets us specify the exact data-generating process. Lines 20-25 generate 500 normal random variables with mean zero and standard deviation 0.1. In line 26 we define y as a linear function of X1, X2, and X3, plus some added noise. The neural network will attempt to use the sigmoid to fit the linear relationship. This exercise …
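The report's figure containing lines 18-26 is cut off in this preview. A minimal sketch consistent with the description above might look like the following; the coefficient values are illustrative (the report's actual values are not shown), and for the fit we use R's general-purpose optim() in place of the report's hand-coded backpropagation step:

# Simulate 500 observations: inputs and noise are N(0, 0.1), as described above
set.seed(42)
n  <- 500
x1 <- rnorm(n, mean = 0, sd = 0.1)
x2 <- rnorm(n, mean = 0, sd = 0.1)
x3 <- rnorm(n, mean = 0, sd = 0.1)
x  <- cbind(x1, x2, x3)
y  <- 1 + 2*x1 - 3*x2 + 0.5*x3 + rnorm(n, mean = 0, sd = 0.1)  # illustrative coefficients

# Illustration only: minimize loss.fun over the 6 parameters with optim(),
# rather than the backpropagation step the report develops next
fit <- optim(par = rnorm(6), fn = loss.fun, x = x, y = y)
fit$value   # residual sum of squares at the optimum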

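As noted above, the one-layer, one-neuron model extends easily to more neurons. A hypothetical sketch of a two-neuron forward pass (ours, not the report's code): wh grows to a 4 x 2 matrix (bias plus 3 input weights, one column per neuron) and wy to length 3 (bias plus one weight per hidden neuron):

fwdprop2 <- function(x, wh, wy){
  h <- sigmoid(cbind(1, x) %*% wh)   # hidden layer: n x 2 matrix of activations
  output <- cbind(1, h) %*% wy       # output layer: n x 1 fitted values
  return(list(output = output, h = h))
}

wh2 <- matrix(rnorm(8), nrow = 4)    # arbitrary weights for the two hidden neurons
wy2 <- rnorm(3)
# fwdprop2(x, wh2, wy2) returns the output plus both hidden activations per observation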