Internet + Quantitative Investing: A Hands-On Guide to Big-Data Indices

Source: https://uqer.io/community/share/55263359f9f06c8f3390457b

Strategy Overview

The strategy cross-validates candidate bull stocks along multiple dimensions (company fundamentals, market-driven indicators, and market sentiment), looking for names that enjoy "the right time, the right place, and the right people", so that anyone can build a big-data index matching their own investment philosophy. The implementation draws on the factor scorecard strategy by @吴宇笛 in the Mercury community.

The strategy parameters are as follows:

  • Start date: 2014-01-01
  • End date: 2016-05-18
  • Universe: SSE 50
  • Benchmark: SSE 50
  • Initial capital: 100,000 CNY
  • Rebalance period: 1 month

How the strategy's input data are obtained:

  • The 10-day moving average (MA10), 60-day moving average (MA60), return on assets (ROA), price-to-earnings ratio (PE), log market capitalization (LCAP), median daily price range (DHILO), net profit to total operating revenue (NPToTOR), debt-to-equity ratio (DebtEquityRatio), year-over-year operating profit growth (OperatingProfitGrowRate), and year-over-year total asset growth (TotalAssetGrowRate) are all available via DataAPI.MktStockFactorsDateRangeGet

  • The news heat index is available via DataAPI.NewsHeatIndexGet
  • The news sentiment index is available via DataAPI.NewsSentimentIndexGet; like the heat index, it is extracted by DataYes from massive volumes of related news using big-data analytics
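Both news series are keyed by calendar publish date rather than trading day, so they have to be filtered and joined onto the factor table's trade dates before use. The alignment can be sketched with pandas on made-up data (column names follow the API fields used later: `tradeDate`, `newsPublishDate`, `heatIndex`):

```python
import pandas as pd

# Synthetic factor frame keyed by trading days
factors = pd.DataFrame({
    'tradeDate': ['2016-01-04', '2016-01-05', '2016-01-06'],
    'PE': [12.1, 12.3, 12.0],
})

# Synthetic daily news-heat series, including a non-trading day (01-03)
heat = pd.DataFrame({
    'newsPublishDate': ['2016-01-03', '2016-01-04', '2016-01-05', '2016-01-06'],
    'heatIndex': [55, 60, 58, 62],
})

# Keep only rows published on trading days, then join on the two date columns
heat = heat[heat['newsPublishDate'].isin(factors['tradeDate'])]
merged = factors.merge(heat, how='inner',
                       left_on='tradeDate', right_on='newsPublishDate')
merged = merged.drop('newsPublishDate', axis=1)
print(merged)
```

On the real platform the same join runs per stock inside StockFactorsGet; days without news coverage fall back to a heat/sentiment value of 0.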

Rebalancing Logic

(1) For each stock, fetch closing prices over the preceding 120 trading days and compute 20-day cumulative returns, yielding 100 return observations

(2) Fetch the same stock's fundamentals, market-driven indicators, and news heat and sentiment indicators over the matching 100 trading days; compute the mean and standard deviation of each series and standardize it

(3) Run an elastic net (ElasticNet) regression with the 20-day cumulative return as the dependent variable and the fundamentals, market-driven indicators, and news heat/sentiment indicators as independent variables

(4) Fetch the previous day's values of the stock's fundamentals, market-driven indicators, and news heat/sentiment indicators

(5) Score the previous day's value of each factor relative to the preceding 100 trading days as (previous-day value - mean) / standard deviation, rounded to the nearest integer

(6) Sum these scores weighted by the regression coefficients from step (3) to obtain the stock's composite score, which serves as the index value used for screening

(7) Rank stocks by this score and take the five highest as the buy list

(8) Rebalance to the buy list
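Steps (2) through (6) can be sketched end to end on synthetic data. The factor names, the numbers, and the small 0.01 penalty below are illustrative only (the strategy itself fits ElasticNet with alpha=0, i.e. plain least squares):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
n, factors = 100, ['heatIndex', 'sentimentIndex', 'ROA']

# (2) 100 days of synthetic factor values; keep mean/std for later scoring
X = rng.randn(n, len(factors)) * [10.0, 1.0, 0.05] + [50.0, 0.0, 0.1]
mean, std = X.mean(axis=0), X.std(axis=0)
Xz = (X - mean) / std                        # standardized factor matrix

# Synthetic 20-day forward returns, driven mostly by the first factor
y = 0.04 * Xz[:, 0] + 0.01 * Xz[:, 1] + 0.005 * rng.randn(n)

# (3) elastic-net regression of forward returns on the standardized factors
en = ElasticNet(alpha=0.01, fit_intercept=True)
en.fit(Xz, y)
weights = dict(zip(factors, en.coef_))

# (4)-(6) score a hypothetical previous-day snapshot against the window
latest = np.array([75.0, 1.5, 0.12])
z = np.round((latest - mean) / std)          # rounded z-score per factor
score = sum(weights[f] * z[i] for i, f in enumerate(factors))
print(weights, score)
```

With this setup the heat-index coefficient dominates and the irrelevant ROA factor is shrunk toward zero, so a stock whose heat index sits well above its trailing mean gets a positive composite score.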

```python
from datetime import datetime

import pandas as pd
import numpy as np
from sklearn.linear_model import ElasticNet
from CAL.PyCAL import *

used_factors = ['MA10', 'MA60', 'ROA', 'PE', 'LCAP', 'DHILO', 'DebtEquityRatio',
                'OperatingProfitGrowRate', 'TotalAssetGrowRate', 'NPToTOR']
# used_factors = ['ASSI', 'EBITToTOR', 'ETP5', 'MA60', 'HSIGMA', 'PE', 'VOL60', 'SUE', 'DAVOL20', 'TotalAssetGrowRate']


def StockFactorsGet(universe, trading_days):
    """Fetch factor, news-heat and news-sentiment series for each stock in the universe."""
    data_all = {}
    for stock in universe:
        try:
            data = DataAPI.MktStockFactorsDateRangeGet(secID=stock,
                                                       beginDate=trading_days[0],
                                                       endDate=trading_days[-1],
                                                       field=['tradeDate'] + used_factors)
        except Exception as e:
            print(e)
        # News heat index; fall back to 0 when the stock has no news data
        try:
            news_data = DataAPI.NewsHeatIndexGet(secID=stock, beginDate=trading_days[0],
                                                 endDate=trading_days[-1])
            heatIndex = news_data.set_index('newsPublishDate').sort_index().reset_index()[['heatIndex', 'newsPublishDate']]
            heatIndex = heatIndex[heatIndex['newsPublishDate'].isin(data.tradeDate.values)].reset_index()
            data = pd.merge(data, heatIndex, how='inner',
                            left_on='tradeDate', right_on='newsPublishDate').drop(['index', 'newsPublishDate'], axis=1)
        except Exception as e:
            data['heatIndex'] = 0
        # News sentiment index; same fallback
        try:
            emotion_data = DataAPI.NewsSentimentIndexGet(secID=stock, beginDate=trading_days[0],
                                                         endDate=trading_days[-1])
            emotionIndex = emotion_data.set_index('newsPublishDate').sort_index().reset_index()[['sentimentIndex', 'newsPublishDate']]
            emotionIndex = emotionIndex[emotionIndex['newsPublishDate'].isin(data.tradeDate.values)].reset_index()
            data = pd.merge(data, emotionIndex, how='inner',
                            left_on='tradeDate', right_on='newsPublishDate').drop(['index', 'newsPublishDate'], axis=1)
        except Exception as e:
            data['sentimentIndex'] = 0
        # Interaction term: heat times sentiment
        data['news_emotion'] = data['heatIndex'] * data['sentimentIndex']
        data_all[stock] = data
    return data_all


def StockRegDataGet(stock, trading_days, factors, shift=20):
    """Build the regression sample: factor values plus `shift`-day forward returns."""
    start, end = trading_days[0], trading_days[-1]
    data = factors[(factors.tradeDate >= start.strftime('%Y-%m-%d')) &
                   (factors.tradeDate <= end.strftime('%Y-%m-%d'))][:-shift]
    ret = DataAPI.MktEqudGet(secID=stock, beginDate=start.strftime('%Y%m%d'),
                             endDate=end.strftime('%Y%m%d'), field=['tradeDate', 'closePrice'])
    ret['fwdPrice'] = ret['closePrice'].shift(-shift)          # close `shift` days ahead
    ret['return'] = ret['fwdPrice'] / ret['closePrice'] - 1.   # 20-day cumulative return
    ret = ret[:-shift]                                         # last rows lack a forward price
    data = data.merge(ret, how='inner', on='tradeDate')
    return data.loc[:, ['return', 'heatIndex', 'sentimentIndex', 'news_emotion'] + used_factors]


def GetRegressionResult(data):
    """Standardize the factors and regress forward returns on them with an elastic net."""
    data = data.dropna().copy()
    all_factors = ['heatIndex', 'sentimentIndex', 'news_emotion'] + used_factors
    for f in all_factors:
        if data[f].std() == 0:    # skip constant columns to avoid dividing by zero
            continue
        data[f] = (data[f] - data[f].mean()) / data[f].std()
    y = data['return'].values
    x = np.column_stack([data[f].values for f in all_factors])
    x = np.column_stack([x, np.ones(len(x))])     # explicit constant column
    en = ElasticNet(fit_intercept=True, alpha=0)  # alpha=0 degenerates to ordinary least squares
    en.fit(x, y)
    return dict(zip(all_factors, en.coef_[:-1]))  # drop the constant's coefficient


def preparing(universe, date, factors_all):
    """Compute trailing means, standard deviations and regression weights per stock."""
    date = Date(date.year, date.month, date.day)
    cal = Calendar('China.SSE')
    start = cal.advanceDate(date, '-120B', BizDayConvention.Following)
    end = cal.advanceDate(date, '-1B', BizDayConvention.Following)
    start = datetime(start.year(), start.month(), start.dayOfMonth())
    end = datetime(end.year(), end.month(), end.dayOfMonth())
    trading_days = quartz.utils.tradingcalendar.get_trading_days(start, end)
    datas, means, vols, weights = {}, {}, {}, {}
    for stock in universe:
        try:
            datas[stock] = StockRegDataGet(stock, trading_days, factors_all[stock])
            means[stock] = dict(datas[stock].mean())
            vols[stock] = dict(datas[stock].std())
            weights[stock] = GetRegressionResult(datas[stock])
        except Exception as e:
            pass    # skip stocks with insufficient data
    return means, vols, weights
```
```python
from datetime import datetime

# Pre-fetch factor and news data for the whole backtest window
end = datetime(2016, 5, 18)
f_start = datetime(2014, 1, 1)
universe = set_universe('SH50')
f_days = quartz.utils.tradingcalendar.get_trading_days(f_start, end)
factors_all = StockFactorsGet(universe, f_days)
```
```python
from datetime import datetime

start = datetime(2014, 6, 1)
end = datetime(2016, 5, 18)
benchmark = 'SH50'
universe = set_universe('SH50')
capital_base = 100000
refresh_rate = 20    # rebalance roughly once a month (20 trading days)
# f_start = datetime(2012, 6, 1)
# f_days = quartz.utils.tradingcalendar.get_trading_days(f_start, end)
# factors_all = StockFactorsGet(universe, f_days)


def initialize(account):
    pass


def handle_data(account):
    print(account.current_date)
    means, vols, weights = preparing(account.universe, account.current_date, factors_all)
    # Previous business day's factor snapshot
    cal = Calendar('China.SSE')
    date = Date(account.current_date.year, account.current_date.month, account.current_date.day)
    date = cal.advanceDate(date, '-1B', BizDayConvention.Following)
    date = datetime(date.year(), date.month(), date.dayOfMonth())
    factors_cur = StockFactorsGet(account.universe, [date])
    score = {}
    all_factors = ['heatIndex', 'sentimentIndex', 'news_emotion'] + used_factors
    for stock in account.universe:
        if stock not in weights:
            continue
        fac = factors_cur[stock]
        s = 0
        for f in all_factors:
            try:
                x = fac[f].iloc[-1]
                x = (x - means[stock][f]) / vols[stock][f]
                s += weights[stock][f] * int(round(x))   # rounded z-score times regression weight
            except Exception:
                pass
        score[stock] = s
    # The five highest composite scores form the buy list
    buylist = sorted(score.keys(), key=lambda x: score[x])[-5:]
    rebalance(account, buylist)


def rebalance(account, buylist):
    # Sell holdings not on the buy list, then spread capital equally across it
    for stock in account.valid_secpos:
        if stock not in buylist:
            order_to(stock, 0)
    for stock in buylist:
        order(stock, account.referencePortfolioValue / len(buylist) / account.referencePrice[stock])
```
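The equal-weight sizing inside rebalance can be illustrated in isolation; the portfolio value, tickers, and prices below are made up:

```python
# Hypothetical snapshot: total portfolio value and last prices of the buy list
portfolio_value = 100000.0
prices = {'600000.XSHG': 10.0, '600036.XSHG': 25.0, '601318.XSHG': 40.0}

# Equal cash slice per stock, converted to (fractional) share quantities;
# a live engine would round down to whole board lots of 100 shares
target_cash = portfolio_value / len(prices)
orders = {s: target_cash / p for s, p in prices.items()}
print(orders)    # 600000.XSHG -> 3333.33 shares
```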
The backtest prints each rebalance date, roughly one per month:

```
2014-06-03  2014-07-01  2014-07-29  2014-08-26  2014-09-24  2014-10-29
2014-11-26  2014-12-24  2015-01-23  2015-02-27  2015-03-27  2015-04-27
2015-05-26  2015-06-24  2015-07-22  2015-08-19  2015-09-18  2015-10-23
(output truncated)
```