MIT’s AIM Labs is a cohort of students working on, or “hacking,” curiosity-driven projects involving AI, ML, and similar computing technologies. I worked with a group to develop a stock prediction algorithm. Having suffered at the hands of the stock market1, I wanted to see if I could do better with a systematic, algorithmic analysis of the market.


Our work revolved around building several neural network models. We split the project into a market-based (quantitative) analysis and a sentiment-based (qualitative) analysis. I was in charge of the quantitative part, which involved feedforward neural networks (FFNNs), genetic algorithms (GAs), and recurrent neural networks (RNNs).


The pipeline proceeded as follows: stock market data from Yahoo Finance was scraped using Beautiful Soup, then converted to np.array form. PyTorch then trained on a historical period of stock information while withholding the most recent data; the withheld data was used to test the model and score its accuracy.
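The shape of that pipeline can be sketched in a few lines. This is a toy version under stated assumptions, not our actual code: it substitutes a synthetic price series for the scraped Yahoo Finance data and a tiny NumPy feedforward net for the PyTorch model, but it shows the same chronological holdout — train on the earlier period, score accuracy on the withheld recent period.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for scraped closing prices (the real project scraped
# these from Yahoo Finance with Beautiful Soup and trained with PyTorch).
closes = 100 * np.cumprod(1 + rng.normal(0, 0.01, 500))

WINDOW = 10
# Features: the last WINDOW daily returns; label: 1 if the next day is up.
returns = np.diff(closes) / closes[:-1]
X = np.array([returns[i:i + WINDOW] for i in range(len(returns) - WINDOW)])
y = (returns[WINDOW:] > 0).astype(float)

# Chronological holdout: train on the historical period, test on recent data.
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Minimal one-hidden-layer FFNN trained by gradient descent on log-loss.
H, lr = 16, 0.1
W1 = rng.normal(0, 0.5, (WINDOW, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H); b2 = 0.0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(500):
    h = np.tanh(X_train @ W1 + b1)       # hidden layer
    p = sigmoid(h @ W2 + b2)             # predicted P(up)
    g = (p - y_train) / len(y_train)     # gradient of mean log-loss at output
    W2 -= lr * h.T @ g; b2 -= lr * g.sum()
    gh = np.outer(g, W2) * (1 - h**2)    # backprop through tanh
    W1 -= lr * X_train.T @ gh; b1 -= lr * gh.sum(axis=0)

# Accuracy is scored only on the withheld (most recent) period.
h = np.tanh(X_test @ W1 + b1)
acc = ((sigmoid(h @ W2 + b2) > 0.5) == y_test).mean()
print(f"held-out accuracy: {acc:.2f}")
```

On purely random returns like these, held-out accuracy hovers around chance; the point of the sketch is the split and training loop, not the score.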


On Demo Day, we showed the progress we had made: an FFNN that could be adapted to predict stock movement across the board, as well as a GA (constructed by another group member) that indicated the level of risk a trader or institution was willing to take.
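Since the GA was another member’s work, I can only gesture at the mechanism, but the core loop of any genetic algorithm is the same: evaluate a population, select the fittest, recombine and mutate. The sketch below is entirely hypothetical — it evolves a single risk-level parameter against a toy risk-adjusted utility, not our actual fitness function.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fitness: utility of a risk level r in [0, 1]. Return rises linearly
# with risk but is penalized quadratically, so the optimum is MU / (2 * LAM).
MU, LAM = 0.8, 1.0

def fitness(r):
    return MU * r - LAM * r**2

POP, GENS, MUT = 30, 60, 0.05
pop = rng.uniform(0, 1, POP)  # initial population of candidate risk levels

for _ in range(GENS):
    scores = fitness(pop)
    # Selection: keep the top half as parents.
    parents = pop[np.argsort(scores)[-POP // 2:]]
    # Crossover: average two random parents; mutation: small Gaussian noise.
    mothers = rng.choice(parents, POP)
    fathers = rng.choice(parents, POP)
    pop = np.clip((mothers + fathers) / 2 + rng.normal(0, MUT, POP), 0, 1)

best = pop[np.argmax(fitness(pop))]
print(f"evolved risk level: {best:.2f}")  # analytic optimum is 0.40
```

With a real fitness function (say, backtested returns penalized by drawdown), the same loop would evolve toward whatever risk posture the data rewards.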


The project served more as an introduction to and experimentation with ML than a deep, product-driven pursuit of a potential market competitor in stock prediction. Most of my progress and research centered on finding papers and scouting online resources to understand FFNNs, GAs, and RNNs. Writing this article more than a year after the project began, I’ve realized that there are no words of wisdom or hacks that suddenly make ML easy or inspiring. One simply has to keep digging and, as the effort accumulates, nuggets of gold can be found.



Footnotes

1. Losing $2k but gaining some virtues; see my article "Stocking Up on Trading" for more information.