Machine Learning and Deep Learning Algorithm Trading, Implementing Autoencoders with TensorFlow 2

Today, financial markets generate enormous amounts of data. This gives investors more information than ever, but it also makes analyzing and using that data effectively increasingly difficult. This is where machine learning and deep learning algorithms can help.

This article will explain how to implement an autoencoder using TensorFlow 2. An autoencoder is a type of unsupervised learning algorithm that compresses input data and reconstructs it. It is useful for understanding the characteristics of financial data and detecting abnormal patterns or outliers.

1. What is an Autoencoder?

An autoencoder works by encoding the input data into a lower dimension and then decoding it back to the original dimension. Typically, the dimension of the hidden layer is smaller than that of the input layer, allowing the network to learn the significant characteristics of the input data.

1.1 Basic Structure

The structure of an autoencoder can mainly be divided into three parts:

  • Encoder: Compresses the input data into a low-dimensional vector.
  • Decoder: Restores the compressed vector back to the original input data.
  • Loss Function: Measures the difference between the original input and the reconstructed output.

1.2 Applications of Autoencoders

Autoencoders can be utilized for various purposes such as:

  • Dimensionality Reduction
  • Noise Reduction
  • Anomaly Detection

2. How Autoencoders Work

Autoencoders encode the input and then decode the encoded representation to reconstruct the input. During this process, the network learns the important features of the input.

Below is a basic learning process of an autoencoder:


1. Pass the input data to the network.
2. The encoder compresses the input into a low dimension.
3. The decoder transforms the compressed data back to the original dimension.
4. Calculate the loss: the difference between the original input and the reconstructed output.
5. Weight adjustments in the neural network are made to reduce the loss.

3. Implementing an Autoencoder with TensorFlow 2

3.1 Environment Setup

First, you need to install TensorFlow 2 and the required packages. Execute the command below to install the necessary libraries.

pip install numpy pandas tensorflow matplotlib

3.2 Data Preparation

Now you need to load and preprocess the financial data to be used. Here, we will use simple stock price data as an example.


import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Load data from CSV file
data = pd.read_csv('stock_data.csv')

# Select necessary columns (e.g., 'Close' price)
prices = data['Close'].values
prices = prices.reshape(-1, 1)  # Convert to 2D format

# Scale prices to the [0, 1] range so that the MSE loss and the anomaly
# threshold used later are on a comparable scale
scaler = MinMaxScaler()
prices = scaler.fit_transform(prices)

3.3 Defining the Autoencoder Model

Next, we define the structure of the autoencoder. We will implement both the encoder and the decoder.


import tensorflow as tf
from tensorflow.keras import layers, models

# Define the autoencoder model
def build_autoencoder():
    input_layer = layers.Input(shape=(1,))
    
    # Encoder part
    encoded = layers.Dense(32, activation='relu')(input_layer)
    encoded = layers.Dense(16, activation='relu')(encoded)
    
    # Decoder part
    decoded = layers.Dense(32, activation='relu')(encoded)
    decoded = layers.Dense(1, activation='linear')(decoded)
    
    autoencoder = models.Model(input_layer, decoded)
    return autoencoder

# Create the model
autoencoder = build_autoencoder()
autoencoder.compile(optimizer='adam', loss='mean_squared_error')

3.4 Training the Model

To train the model, we will split the data into training and testing sets and then train the model.


from sklearn.model_selection import train_test_split

# Split into training and testing sets
X_train, X_test = train_test_split(prices, test_size=0.2, random_state=42)

# Train the model
history = autoencoder.fit(X_train, X_train,
                          epochs=100,
                          batch_size=32,
                          validation_data=(X_test, X_test))

3.5 Visualizing the Results

You can visualize the training process of the model to observe the changes in loss.


import matplotlib.pyplot as plt

plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.title('Autoencoder Model Loss')
plt.show()

3.6 Performing Anomaly Detection

Using the model, you can detect outliers in the input data. After making predictions for the test data, you can calculate the differences compared to the original data.


# Perform predictions
predicted = autoencoder.predict(X_test)

# Calculate reconstruction errors
reconstruction_error = tf.reduce_mean(tf.square(X_test - predicted), axis=1)

# Set a threshold and detect anomalies
threshold = 0.1  # Adjust this value as needed
anomalies = reconstruction_error > threshold

# Print indices of detected anomalies
print("Detected anomalies at indices:", tf.where(anomalies).numpy().flatten())

4. Advantages and Disadvantages of Autoencoders

4.1 Advantages

  • Unsupervised Learning: Can learn from unlabeled data.
  • Feature Extraction: Automatically learns important data patterns.
  • Simplicity: A relatively compact structure that often trains faster than larger deep models.

4.2 Disadvantages

  • Overfitting: Can overfit when there is a small amount of data.
  • Reconstruction Quality: It can be difficult to reconstruct high-dimensional data appropriately.

5. Conclusion

Through this article, we have learned about the implementation and applications of autoencoders using TensorFlow 2. Autoencoders can be a useful tool in financial data analysis, helping to understand the main features of data and detect outliers.

In the future, it may be worthwhile to extend autoencoders into more complex architectures or apply them to other kinds of financial data. The influence of machine learning and deep learning in the financial sector is growing rapidly, enabling the development of more efficient trading strategies.

Finally, putting a trained autoencoder to work in an actual trading strategy and pursuing potential profits can be a rewarding challenge.

Machine Learning and Deep Learning Algorithm Trading, Implementation of DDQN Using TensorFlow 2

1. Introduction

Due to the complexity and volatility of financial markets, trading strategies are evolving day by day. In particular, with the application of machine learning and deep learning technologies to trading strategies, investors can utilize more data and information than ever to make optimal decisions. In this course, we will explore how to implement an algorithmic trading system using DDQN (Double Deep Q-Network), a reinforcement learning technique. This course will introduce how to implement DDQN using the TensorFlow 2 library and apply it to real stock trading data.

2. Overview of DDQN (Double Deep Q-Network)

DDQN is a reinforcement learning technique designed to overcome a limitation of the original DQN (Deep Q-Network), which itself is a deep-learning extension of Q-learning. In DQN, the same network both selects and evaluates the greedy action in the learning target, which systematically overestimates Q-values. DDQN addresses this issue by splitting these two roles across two networks.

The structure of DDQN is similar to that of a standard DQN, but it evaluates action values more accurately through two networks: the main (online) network selects the best next action, while the target network evaluates its value. This makes learning more stable and generally produces better results. These advantages make DDQN well suited to noisy environments such as financial markets.

3. Environment Setup

3.1. Installing Required Libraries

We need to install several libraries to build our machine learning model. The libraries that will be primarily used are as follows:

pip install numpy pandas matplotlib tensorflow gym

3.2. Collecting Trading Data

To train the DDQN model, appropriate stock trading data is required. You can collect data using various data sources, such as Yahoo Finance, Alpha Vantage, and Quandl. For example, you can collect data using the familiar yfinance library.

import yfinance as yf
data = yf.download("AAPL", start="2010-01-01", end="2020-01-01")

4. Implementing the DDQN Model

4.1. Setting Up the Environment

Let’s set up the environment for implementing DDQN. The environment can be implemented through OpenAI’s Gym library. The basic structure is as follows:

import gym
import numpy as np

class StockTradingEnv(gym.Env):
    def __init__(self, data):
        super(StockTradingEnv, self).__init__()
        self.data = data
        self.current_step = 0
        self.action_space = gym.spaces.Discrete(3)  # Hold, Buy, Sell
        self.observation_space = gym.spaces.Box(
            low=0, high=1, shape=(len(data.columns),), dtype=np.float32)

    def reset(self):
        self.current_step = 0
        return self.data.iloc[self.current_step].values

    def step(self, action):
        ...
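
The step method is elided above. A minimal sketch of how it might be filled in follows (the reward logic is deliberately simplified for illustration and assumes the yfinance data from section 3.2 with its 'Close' column; a realistic environment would also track open positions, cash, and transaction costs):

    def step(self, action):
        prev_price = self.data['Close'].iloc[self.current_step]
        self.current_step += 1
        price = self.data['Close'].iloc[self.current_step]

        # Simplified reward: the price change when buying, the negative
        # change when selling, and zero when holding
        if action == 1:      # Buy
            reward = price - prev_price
        elif action == 2:    # Sell
            reward = prev_price - price
        else:                # Hold
            reward = 0.0

        done = self.current_step >= len(self.data) - 1
        obs = self.data.iloc[self.current_step].values
        return obs, reward, done, {}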

4.2. Building the DQN Network

The DQN network consists of an input layer, hidden layers, and an output layer. The code below shows the structure of a basic DQN network:

import tensorflow as tf

def create_model(state_size, action_size):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(24, input_dim=state_size, activation='relu'))
    model.add(tf.keras.layers.Dense(24, activation='relu'))
    model.add(tf.keras.layers.Dense(action_size, activation='linear'))
    model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
    return model

4.3. Building the DDQN Training Loop

We will construct a loop for training DDQN. This loop will include important concepts of DDQN, such as experience replay and target network updates.

import random
from collections import deque

class Agent:
    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.memory = deque(maxlen=2000)
        self.gamma = 0.95  # discount rate
        self.epsilon = 1.0  # exploration rate
        self.epsilon_min = 0.01
        self.epsilon_decay = 0.995
        self.model = create_model(state_size, action_size)
        self.target_model = create_model(state_size, action_size)

    def act(self, state):
        ...
    
    def replay(self, batch_size):
        ...
        
    def update_target_model(self):
        self.target_model.set_weights(self.model.get_weights())
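
The act and replay methods above are left as stubs. Below is a minimal sketch of how they might be filled in (illustrative only; it assumes numpy is imported as np, that states are arrays of shape (1, state_size), and that transitions are stored in self.memory as (state, action, reward, next_state, done) tuples). The core DDQN idea lives in replay: the main network selects the best next action, and the target network evaluates its value.

    def act(self, state):
        # Epsilon-greedy exploration
        if np.random.rand() <= self.epsilon:
            return random.randrange(self.action_size)
        q_values = self.model.predict(state, verbose=0)
        return np.argmax(q_values[0])

    def replay(self, batch_size):
        minibatch = random.sample(self.memory, batch_size)
        for state, action, reward, next_state, done in minibatch:
            target = self.model.predict(state, verbose=0)
            if done:
                target[0][action] = reward
            else:
                # Double DQN: the main network selects the next action,
                # the target network evaluates its value
                best_action = np.argmax(self.model.predict(next_state, verbose=0)[0])
                target[0][action] = (reward + self.gamma *
                    self.target_model.predict(next_state, verbose=0)[0][best_action])
            self.model.fit(state, target, epochs=1, verbose=0)
        # Decay the exploration rate
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay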

5. Model Evaluation and Optimization

5.1. Performance Evaluation

To evaluate the performance of the DDQN model, you can use financial metrics such as return and Sharpe ratio. Once the model has been trained, you can analyze investment performance through metrics like the following.

def evaluate_model(model, test_data):
    ...
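
As one concrete example, the Sharpe ratio can be computed from a series of periodic returns (a sketch assuming daily returns and 252 trading days per year):

import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    # Annualized Sharpe ratio: mean excess return divided by its volatility
    excess = np.asarray(returns) - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std()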

5.2. Hyperparameter Tuning

To maximize the model’s performance, hyperparameter tuning is essential. Explore optimal hyperparameters using techniques such as random search and grid search.

from sklearn.model_selection import ParameterGrid

params = {'batch_size': [32, 64], 'epsilon_decay': [0.995, 0.99]}
grid_search = ParameterGrid(params)
for param in grid_search:
    ...

6. Conclusion

This course explained how to use DDQN to implement an algorithmic trading system based on machine learning and deep learning. DDQN can be effectively used to find viable strategies in complex environments such as stock trading. The potential application of artificial intelligence in the financial sector is endless, so continue to research and experiment.

I hope this course helps you develop more effective trading strategies in the financial market through DDQN. If you have any additional questions or need assistance, please feel free to reach out.


Machine Learning and Deep Learning Algorithm Trading, NLP Pipeline from Text to Tokens

Introduction

In recent years, machine learning (ML) and deep learning (DL) have played an innovative role in solving complex problems such as algorithmic trading in financial markets. With the combination of natural language processing (NLP) technology, traders and investors can develop more sophisticated strategies based on insights and data provided by models. This article will conduct an in-depth discussion on algorithmic trading based on machine learning and deep learning, and will detail the process of tokenizing text data through the NLP pipeline.

1. Overview of Machine Learning and Deep Learning

Machine learning and deep learning are fields of artificial intelligence (AI) that have the ability to learn from data and make predictions. Machine learning is a technology that finds patterns in data and trains models based on them. In contrast, deep learning enables multilayered pattern recognition using neural networks. Both technologies are essential for developing predictive and automated trading strategies in the financial market.

1.1 Basic Concepts of Machine Learning

Machine learning can be broadly classified into three types:

  • Supervised Learning: Learns from labeled data and is used to solve classification and regression problems.
  • Unsupervised Learning: Utilizes unlabeled data to understand the structure of the data or perform clustering.
  • Reinforcement Learning: A technique where an agent learns to maximize rewards by interacting with the environment.

1.2 Basic Concepts of Deep Learning

Deep learning solves complex problems through artificial neural networks composed of multiple layers. Generally, neural networks consist of an input layer, hidden layers, and an output layer. As the number of hidden layers and neurons in each layer increases, the model’s expressiveness increases, but there is a risk of overfitting, so appropriate regularization techniques must be applied.

2. Overview of Algorithmic Trading

Algorithmic trading is the use of algorithms to automatically make trading decisions to potentially maximize returns in financial markets. Algorithms analyze market data, news, and technical indicators, and generate trading signals.

2.1 Advantages of Algorithmic Trading

  • Speed: Can analyze data and execute trades much faster than human traders.
  • Accuracy: Trading systems based on quantitative models provide objective judgments, free from emotional decisions.
  • Consistency: Maintains trading consistency by making the same decisions under the same conditions.

3. Data Collection and Preprocessing

The performance of an algorithmic trading system heavily depends on the quantity and quality of the collected data. Market data is gathered from various sources, and text data can be obtained from news, social media, and financial reports. The stages of collecting and preprocessing this data are very important.

3.1 Financial Data Collection

Financial data can be easily collected through APIs, with many services like Yahoo Finance, Alpha Vantage, and Quandl available. Collecting data is essential not only for model training but also for backtesting.

3.2 Text Data Collection

Text data is collected from various sources such as articles from financial news, blog posts, and forum discussions. This can be done using crawling techniques, and libraries like BeautifulSoup and Scrapy in Python can be used to automate the process.
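
As a rough sketch of what such collection might look like with requests and BeautifulSoup (the URL and tag selector are hypothetical; always check a site's terms of service before scraping):

import requests
from bs4 import BeautifulSoup

# Hypothetical news page, for illustration only
html = requests.get("https://example.com/financial-news").text
soup = BeautifulSoup(html, "html.parser")
headlines = [h.get_text(strip=True) for h in soup.find_all("h2")]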

3.3 Data Preprocessing

Collected data often requires a cleaning process. Missing values need to be handled, duplicate data removed, and each data point converted into a consistent format. For example, trading data should be aligned to a consistent time resolution, while text data should be cleaned to remove unnecessary information.

4. Building an NLP Pipeline

Natural language processing (NLP) is a technology that enables machines to understand and interpret human language. In algorithmic trading, NLP is used to analyze text data from news articles, social media feeds, and corporate financial reports to gauge market sentiment. The key steps in an NLP pipeline include:

4.1 Text Cleaning

Before analyzing text data, a cleaning process is needed. Cleaning includes the following steps (a short code sketch follows the list):

  • Lowercase conversion: Converts uppercase letters to lowercase to maintain consistency.
  • Special character removal: Removes unnecessary symbols and characters from the text.
  • Stopword removal: Eliminates common words that do not carry significant meaning (e.g., ‘this’, ‘that’, ‘is’, ‘the’, etc.) to highlight important information.
  • Stemming and Lemmatization: A process of finding the base form of a word, for example, unifying ‘running’, ‘ran’, and ‘runs’ to ‘run’.
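
A minimal sketch of these steps using NLTK (the example sentence is illustrative, and the stopword and WordNet corpora may need to be downloaded once beforehand):

import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# import nltk; nltk.download('stopwords'); nltk.download('wordnet')  # run once
stop_words = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()

def clean_text(text):
    text = text.lower()                                          # lowercase conversion
    text = re.sub(r'[^a-z\s]', ' ', text)                        # special character removal
    tokens = [w for w in text.split() if w not in stop_words]    # stopword removal
    return ' '.join(lemmatizer.lemmatize(w) for w in tokens)     # lemmatization

print(clean_text("The markets are rallying after the Fed's announcement!"))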

4.2 Text Tokenization

Tokenization refers to the process of dividing continuous text data into individual units (tokens). This is primarily divided into word-based or sentence-based tokenization and is necessary for models to convert text into numerical form. Libraries like NLTK and SpaCy in Python can be used.
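
For instance, with NLTK (the punkt tokenizer models may need to be downloaded once first):

from nltk.tokenize import sent_tokenize, word_tokenize

# import nltk; nltk.download('punkt')  # run once if needed
text = "AAPL rose 3% today. Analysts remain bullish."
print(sent_tokenize(text))   # sentence-based tokens
print(word_tokenize(text))   # word-based tokens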

4.3 Word Embeddings

Word embeddings convert words into vectors in a way that machines can understand, primarily using techniques like Word2Vec, GloVe, or FastText. This process maintains the semantic relationships between words, providing effective input data for deep learning models.
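
A small sketch of training Word2Vec with the gensim library (a toy pre-tokenized corpus for illustration; gensim is not used elsewhere in this article and must be installed separately):

from gensim.models import Word2Vec

# Toy corpus of pre-tokenized sentences
sentences = [
    ["shares", "rose", "after", "strong", "earnings"],
    ["shares", "fell", "after", "weak", "earnings"],
    ["the", "fed", "raised", "interest", "rates"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1)
vector = model.wv["earnings"]                    # 50-dimensional word vector
print(model.wv.most_similar("shares", topn=3))   # semantically nearest words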

4.4 Sentiment Analysis

Sentiment analysis is a technique for determining the sentiment of text data, which is very useful in algorithmic trading. It categorizes sentiments as positive, negative, or neutral to support investment decisions. Machine learning models (e.g., logistic regression, SVM) can be used for sentiment analysis, and transformer models like BERT are increasingly popular.
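
As a simple illustration, a logistic regression sentiment classifier on TF-IDF features might look like this (the labeled headlines are toy data; a real model needs far more examples):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = ["profits surge past expectations", "company misses revenue estimates",
             "stock plunges on weak guidance", "record earnings lift shares"]
labels = [1, 0, 0, 1]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(headlines, labels)
print(clf.predict(["earnings beat estimates"]))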

4.5 Key News Extraction and Summarization

Major financial news arrives continually and can affect trading strategies. Text summarization techniques can condense lengthy news articles, conveying the essential information to the trading algorithm. This allows the algorithm to refine its strategy based on the important factors.

5. Training and Evaluating Machine Learning Models

Once the processed data is ready, the next step is to train machine learning and deep learning models. This process involves learning from the data, recognizing patterns, and predicting future outcomes.

5.1 Data Splitting

Before training a model, the data must be split into training, validation, and test sets. Typically, 70% of the data is used for training, 15% for validation, and the remaining 15% for testing.
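
A 70/15/15 split can be done in two steps (a sketch assuming a feature matrix X and labels y are already defined; shuffle=False preserves time order, which usually matters for financial data):

from sklearn.model_selection import train_test_split

# First split off 30%, then divide it evenly into validation and test sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, shuffle=False)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, shuffle=False)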

5.2 Model Selection

Various machine learning models can be selected, with representative examples including:

  • Linear Regression
  • Decision Tree
  • Random Forest
  • Gradient Boosting
  • Neural Networks

Each model may perform better on specific types of data or problems, so selection should be made according to the context.

5.3 Model Training

Train the selected model using the training set. Hyperparameter tuning (e.g., via grid search or random search) may be required during this process. Cross-validation should be conducted to evaluate the model's generalization performance.

5.4 Model Evaluation

To assess the performance of the trained model, various metrics can be used. Commonly used metrics include Precision, Recall, F1 Score, and ROC-AUC. The performance of the model should ultimately be evaluated using the test set.
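
With scikit-learn these metrics are one-liners (a sketch assuming labels y_true, predictions y_pred, and predicted probabilities y_prob from a fitted classifier):

from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1 score: ", f1_score(y_true, y_pred))
print("ROC-AUC:  ", roc_auc_score(y_true, y_prob))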

6. Establishing Algorithmic Trading Strategies

The final step is to build actual trading strategies on top of the trained models. Buy and sell signals are generated from the model's predictions while the portfolio is managed accordingly.

6.1 Generating Trading Signals

Based on the predictions generated by the model, buy or sell decisions are made. For example, if there is an increase in positive sentiment-related news for a specific stock, a buy signal can be generated.

6.2 Risk Management

Risk management in trading is very important. Techniques such as setting loss limits, capital allocation strategies, and portfolio diversification can be utilized. This helps to minimize losses and maximize profits.

6.3 Backtesting and Performance Evaluation

The constructed strategy is backtested using historical data to evaluate its performance. Backtesting results help confirm the effectiveness of the strategy, and modifications can be made as necessary.

Conclusion

Algorithmic trading utilizing machine learning and deep learning technologies enhances the accuracy of data analysis and aids in making better decisions. By effectively processing and analyzing text data through the NLP pipeline, investors and traders can increase their chances of success in the market with information-based decision-making.

This course has reviewed the entire process from the basics of machine learning and deep learning to the establishment of trading strategies. The technologies of machine learning and deep learning will continue to advance in the financial industry, and continuous learning in this area is essential.

Machine Learning and Deep Learning Algorithm Trading, Embedding Visualization Using TensorBoard

Introduction

Automated trading in financial markets has gained additional significance with the advancements of machine learning and deep learning technologies. As the volume and complexity of data increase, machine learning-based trading algorithms are emerging that can provide insights difficult to obtain through traditional methods. This course will explore trading strategies using machine learning and deep learning algorithms, and explain how to utilize TensorBoard for visualizing the embedding space.

1. Basic Concepts of Machine Learning and Deep Learning

Machine Learning (ML) is the technology of creating algorithms that can learn patterns from data to make predictions or decisions. On the other hand, Deep Learning (DL) is a subfield of machine learning based on artificial neural networks, capable of handling more complex and large-scale problems. Each of these technologies plays an essential role in building automated trading systems, contributing to generating trading signals and maximizing performance.

1.1 Machine Learning Algorithms

Machine learning algorithms can be broadly categorized into supervised learning, unsupervised learning, and reinforcement learning. The method most commonly used in trading algorithms is supervised learning. The model is trained on pairs of input data (e.g., historical stock prices) and the corresponding outputs (e.g., buy/sell signals).

1.2 Deep Learning Algorithms

Deep learning algorithms utilize neural networks composed of multiple layers of neurons to learn more complex patterns. CNNs (Convolutional Neural Networks) are suitable for image data, while RNNs (Recurrent Neural Networks) or LSTMs (Long Short-Term Memory networks) are ideal for time series data, widely used for stock market predictions.

2. Theoretical Basis of Algorithmic Trading

Algorithmic trading is based on advanced mathematics and statistics to model market movements. These mathematical models generally include time series analysis, regression analysis, probabilistic models, and optimization techniques.

2.1 Time Series Analysis

Time series analysis is used to understand data over time, such as stock prices. It is useful for predicting future price trends based on historical data. While traditional time series models like ARIMA exist, recent models enhance the accuracy of these predictions through machine learning techniques.

2.2 Reinforcement Learning

Reinforcement learning aims for agents to learn optimal behavior strategies through interaction with the environment. In trading, strategies that maximize the value of financial assets can be learned through choices such as buying, selling, and holding.

3. Environment Setup and Data Collection

Setting up an environment for algorithmic trading is very important. We will walk through installing the necessary software and collecting price data.

3.1 Development Environment

Python is the most widely used programming language in the fields of machine learning and deep learning. Django and Flask are useful for building web applications, while libraries like Pandas, NumPy, and Scikit-learn are essential for data processing and implementing machine learning models.

3.2 Data Collection

Data needed for trading can be collected through APIs like Yahoo Finance, Alpha Vantage, and Quandl. In addition to price information, various variables such as financial statements, news, and social media data can also be considered.

4. Model Building and Training

This stage involves building and training machine learning and deep learning models based on the collected data. We will explain how to evaluate model performance and optimize it through hyperparameter tuning.

4.1 Data Preprocessing

Preprocessing data is essential to maximize the performance of machine learning models. It is crucial to improve data quality through methods like handling missing values, normalization, and feature selection.
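
For example, a rough preprocessing sketch with pandas and scikit-learn (the file name and column names are hypothetical):

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('prices.csv')    # hypothetical price file
df = df.ffill()                   # fill missing values forward
features = StandardScaler().fit_transform(df[['Open', 'High', 'Low', 'Close']])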

4.2 Model Training

You can train various machine learning models (e.g., Random Forest, SVM) using scikit-learn, and build neural networks using Keras and TensorFlow. Techniques for evaluating model performance will also be introduced in this stage.

5. Embedding Visualization via TensorBoard

TensorBoard is a visualization tool provided with TensorFlow that makes it easy to track the training process. It plays a crucial role in monitoring learning progress and visualizing results during deep learning model training.

5.1 Getting Started with TensorBoard

To use TensorBoard, install TensorFlow and then write log files during training that TensorBoard can read and visualize.
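
A minimal setup might look like this (assuming a compiled Keras model and training data are already defined):

import tensorflow as tf

# Write training logs where TensorBoard can find them
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/run1", histogram_freq=1)
model.fit(X_train, y_train, epochs=10, callbacks=[tensorboard_cb])

# Then launch the dashboard from a terminal:
#   tensorboard --logdir logs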

5.2 Embedding Visualization

During the training process of deep learning models, we visualize embeddings to understand relationships between data points. Techniques like PCA (Principal Component Analysis) or t-SNE (t-Distributed Stochastic Neighbor Embedding) can reduce the structure of high-dimensional data to 2D or 3D for visualization.
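
A sketch of exporting embeddings for TensorBoard's Projector plugin, following the standard TensorFlow 2 checkpoint workflow (it assumes embeddings is an (n_samples, n_dims) NumPy array extracted from a trained model):

import os
import tensorflow as tf
from tensorboard.plugins import projector

log_dir = "logs/embeddings"
os.makedirs(log_dir, exist_ok=True)

# Save the embedding matrix as a checkpointed variable
weights = tf.Variable(embeddings)
checkpoint = tf.train.Checkpoint(embedding=weights)
checkpoint.save(os.path.join(log_dir, "embedding.ckpt"))

# Point the Projector plugin at the checkpointed tensor
config = projector.ProjectorConfig()
emb = config.embeddings.add()
emb.tensor_name = "embedding/.ATTRIBUTES/VARIABLE_VALUE"
projector.visualize_embeddings(log_dir, config)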

5.3 Practical Example

We will build a simple deep learning model using TensorFlow and Keras, and explain how to extract embeddings during the training process and visualize them in TensorBoard step by step. You can run the code and observe the results to visually see the changes.

Conclusion

This course has helped you gain a basic understanding of building automated trading systems using machine learning and deep learning, and learned methods to explore relationships between data through embedding visualization. The importance of algorithmic trading in future financial markets will continue to grow, and data-driven decision-making is expected to become an essential element. I hope that you develop your own trading strategies through continuous learning and experimentation.


Machine Learning and Deep Learning Algorithm Trading, From Machine Learning Using Text to Features

1. Introduction

In recent years, there has been a rapid increase in the adoption of machine learning and deep learning in the financial markets. These technologies are driving the growth of algorithmic trading and are being utilized across various asset classes such as stocks, bonds, foreign exchange, and cryptocurrencies. This article will explore machine learning and deep learning in algorithmic trading in detail, and also investigate the potential of machine learning using text data.

2. Basic Concepts of Machine Learning and Deep Learning

2.1 What is Machine Learning?

Machine learning is a set of algorithms and techniques that analyze data to learn patterns and make predictions or decisions based on what was learned. Essentially, machine learning creates models that learn from data and validates them to perform specific tasks.

2.2 What is Deep Learning?

Deep learning is a subset of machine learning that uses artificial neural networks to process data through deeper layers of representation, learning complex patterns. It analyzes data through multiple layers of neurons and achieves high performance in fields such as image recognition, natural language processing (NLP), and speech recognition.

3. Algorithmic Trading

3.1 Definition of Algorithmic Trading

Algorithmic trading refers to the method of trading financial products using computer programs that follow predefined rules. In this process, data analysis and modeling are essential, allowing it to gain an advantage in efficiency and speed over human traders.

3.2 Algorithmic Trading Using Machine Learning

Algorithmic trading leveraging machine learning techniques involves training models on historical data to predict market changes. It is used in various areas such as stock price prediction, portfolio optimization, and risk management. In particular, it shows strengths in predicting market trends by analyzing unstructured data such as news articles and social media.

4. Machine Learning Using Text Data

4.1 Importance of Text Data

Various types of text data exist in the financial markets and play a crucial role in analysis and prediction. Information collected from news, reports, social media posts, and company disclosures can significantly impact the value of the assets concerned. Machine learning models can use this text data to gauge market sentiment and refine prediction models.

4.2 Text Data Processing Steps

Several steps are required to feed text data into machine learning algorithms. These steps include the following (a short code sketch of the feature engineering step follows the list):

  1. Text Collection: Collecting necessary data through web scraping, API calls, etc.
  2. Preprocessing: Cleaning the data through removing stop words, normalization, morphological analysis, etc.
  3. Feature Engineering: Creating features that can aid in analysis.
  4. Modeling: Selecting and training appropriate machine learning or deep learning models.
  5. Evaluation: Evaluating model performance, identifying areas for improvement, and continuously upgrading the model.
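
As a concrete illustration of the feature engineering step, a bag-of-words feature matrix can be built in a few lines (toy headlines for illustration):

from sklearn.feature_extraction.text import CountVectorizer

headlines = [
    "Central bank holds rates steady",
    "Tech shares rally on strong earnings",
    "Oil prices slide amid demand worries",
]

vectorizer = CountVectorizer(stop_words='english', lowercase=True)
X = vectorizer.fit_transform(headlines)   # (n_documents, n_features) sparse matrix
print(vectorizer.get_feature_names_out())
print(X.toarray())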

5. Case Studies of Machine Learning Models

5.1 News Sentiment Analysis

It is possible to develop models that analyze the sentiment of news articles to support investment decisions. Positive news can be analyzed for its impact on stock prices, serving as a buy signal, while negative news can be converted into a sell signal. This is a crucial element in understanding market sentiment.

5.2 Analyst Report Analysis

Models can also be developed to help evaluate the value of specific stocks by analyzing the opinions and reports of analysts. Using natural language processing (NLP) techniques, insights from past reports can be learned, enabling predictions of future stock prices.

5.3 Social Media Analysis

Analyzing mentions of specific assets on social media platforms such as Twitter and Facebook can yield models that predict stock price fluctuations. Reactions on social media are among the factors that can impact the market in real time.

6. Conclusion

Algorithmic trading using machine learning and deep learning has become an important tool in enhancing competitiveness in the financial markets. The process of analyzing market sentiment through text data and developing models that support investment decisions plays a significant role in understanding market complexity. In the future, these technologies will continue to evolve and will have a substantial impact on both investors and traders.
