What are Graph Neural Networks?
GNNs will enable us to get better at making commercially valuable predictions
I’m pretty curious about Graph Neural Networks since I covered a startup that seems to be utilizing them in a big way to build prediction services for Enterprise companies.
The startup, called Kumo.AI, looks promising. I noticed my LinkedIn post introducing them also got more attention than usual from serious A.I. industry leaders and venture capitalists.
So I guess my question is, what do they know about this trend that I do not?
I’m by no means an expert, but let’s try to explore this together:
What is a Graph Neural Network?
Graph neural networks (GNNs) are neural models that capture the dependencies within a graph via message passing between its nodes.
A relatively new family of deep learning methods, GNNs can derive insights from graphs and predict outcomes from the information they gather. A graph represents data with two components – nodes and edges – and GNNs can be applied to graphs to conduct node-level, edge-level, and graph-level analysis and prediction.
In recent years, GNN variants such as the graph convolutional network (GCN), the graph attention network (GAT), and the graph recurrent network (GRN) have demonstrated ground-breaking performance on many deep learning tasks.
Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes.
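To make that aggregation scheme concrete, here is a minimal sketch of one message-passing round in plain NumPy (no GNN library; the toy graph, features, and weights are all stand-ins I’ve made up):

```python
import numpy as np

# Toy graph: 4 nodes in a cycle, undirected edges as (u, v) pairs.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
num_nodes = 4

rng = np.random.default_rng(0)
H = rng.normal(size=(num_nodes, 8))  # one 8-dim feature vector per node

# Adjacency lists (both directions, since the graph is undirected).
neighbors = {v: [] for v in range(num_nodes)}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

# Randomly initialized weight matrices; training would learn these.
W_self = rng.normal(size=(8, 8))
W_neigh = rng.normal(size=(8, 8))

def message_pass(H):
    """One round: each node combines itself with the mean of its neighbors."""
    H_new = np.empty_like(H)
    for v in range(num_nodes):
        neigh_mean = H[neighbors[v]].mean(axis=0)
        H_new[v] = np.maximum(0.0, H[v] @ W_self + neigh_mean @ W_neigh)  # ReLU
    return H_new

# Stacking k rounds lets information reach k hops across the graph.
H = message_pass(message_pass(H))
print(H.shape)  # (4, 8)
```

A node-level prediction head would read each row of the final matrix; a graph-level prediction would first pool (e.g., average) all rows into a single vector, which connects back to the node-level, edge-level, and graph-level tasks mentioned above.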
Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations.
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications, where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects.
Are GNNs Dynamic Programmers?
In recent research, GNNs are claimed to align with dynamic programming (DP), a general problem-solving strategy which expresses many polynomial-time algorithms.
Graph neural networks (GNNs) can process graph-based information to make predictions.
Will GNNs form the basis of the most accurate predictions for Enterprise companies and startups that intersect real-world industries in a SaaS model?
One area that strikes me as having a lot of potential is Few-Shot Learning with Graph Neural Networks. A graph neural network architecture that generalizes several of the recently proposed few-shot learning models could have enormous potential.
How have GNNs come to this point, though? Graphs are a data structure that models a set of objects (nodes) and their relationships (edges).
Recently, research on analyzing graphs with machine learning has been receiving more and more attention because of the great expressive power of graphs; that is, graphs can be used to denote a large number of systems across various areas, including social science (social networks (Wu et al., 2020)), natural science (physical systems (Sanchez et al., 2018; Battaglia et al., 2016) and protein-protein interaction networks (Fout et al., 2017)), knowledge graphs (Hamaguchi et al., 2017), and many other research areas (Khalil et al., 2017).
Mention of the “Social Graph”
Graphs are everywhere around us. Your social network is a graph of people and relations.
Every graph is composed of nodes and edges. For example, in a social network, nodes can represent users and their characteristics (e.g., name, gender, age, city), while edges can represent the relations between the users.
Most state-of-the-art approaches for incorporating graph-structure information into neural network operations enforce similarity between the representations of neighboring nodes.
Recent advances in neural algorithmic reasoning with graph neural networks (GNNs) are propped up by the notion of algorithmic alignment. Broadly, a neural network will be better at learning to execute a reasoning task (in terms of sample complexity) if its individual components align well with the target algorithm.
Companies like Google are working on this.
There appears to be an intricate connection between GNNs and DP, going well beyond the initial observations over individual algorithms such as Bellman-Ford. This connection is consistent with several prior findings in the literature, and there is hope it will serve as a foundation for building stronger algorithmically aligned GNNs.
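The Bellman-Ford case is easy to see in code. Here is a small sketch (on a toy weighted graph of my own) showing that each Bellman-Ford relaxation round has the same shape as a message-passing round: every node aggregates candidate distances arriving from its neighbors, just with min instead of a learned aggregator:

```python
import math

# Toy weighted directed graph as (source, target, weight) triples.
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0), (2, 3, 8.0)]
num_nodes = 4
source = 0

# dist[v] plays the role of node v's representation (here just a scalar).
dist = [math.inf] * num_nodes
dist[source] = 0.0

# Each iteration is one "message-passing round": every edge sends the
# candidate distance dist[u] + w to its target node, and each node
# aggregates incoming messages with min -- the Bellman-Ford DP update.
for _ in range(num_nodes - 1):
    dist = [
        min([dist[v]] + [dist[u] + w for (u, t, w) in edges if t == v])
        for v in range(num_nodes)
    ]

print(dist)  # [0.0, 3.0, 1.0, 8.0]
```

The algorithmic-alignment argument is roughly that a GNN whose aggregation already has this shape should find it easier, in sample-complexity terms, to learn DP-style algorithms.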
While GNNs operate on the usual (typically sparse) graphs, Graph Transformers (GTs) operate on a fully-connected graph where each node attends to every other node. In 2021, perhaps the two most visible graph transformer models were SAN (Spectral Attention Networks) and Graphormer.
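One quick way to see the difference: a GT computes attention over every pair of nodes, while a GNN-style layer restricts mixing to the neighbors in the adjacency matrix. A minimal NumPy sketch (leaving out the positional and structural encodings that real models like SAN and Graphormer rely on):

```python
import numpy as np

rng = np.random.default_rng(1)
num_nodes, dim = 4, 8
H = rng.normal(size=(num_nodes, dim))  # node features
A = np.array([[0, 1, 0, 0],            # adjacency matrix of the sparse graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

scores = (H @ H.T) / np.sqrt(dim)  # pairwise attention logits

# Graph transformer: attend over the fully-connected graph (all pairs).
gt_out = softmax(scores) @ H

# GNN-style: mask the logits so each node only sees its actual neighbors
# (real GNNs often also add self-loops so a node keeps its own features).
masked = np.where(A > 0, scores, -np.inf)
gnn_out = softmax(masked) @ H

print(gt_out.shape, gnn_out.shape)  # (4, 8) (4, 8)
```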
Graph neural networks (GNN) are a type of machine learning algorithm that can extract important information from graphs and make useful predictions. With graphs becoming more pervasive and richer with information, and artificial neural networks becoming more popular and capable, GNNs have become a powerful tool for many important applications.
But How do GNNs make Predictions?
Graph neural networks can learn from all the information a graph contains. Just as the nodes can be stored as rows of a table (one user per row, with their attributes), the edges – the lines that connect the nodes – can be represented the same way: each row contains the IDs of the two users it connects, plus additional information such as the date of the friendship, the type of relationship, etc.
Finally, the general connectivity of the graph can be represented as an adjacency matrix that shows which nodes are connected to each other.
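Putting those pieces together, here is a concrete illustration (with made-up users and attributes) of the same small social graph stored as a node table, an edge table, and an adjacency matrix:

```python
import numpy as np

# Node table: one row per user with their attributes.
users = [
    {"id": 0, "name": "Ana",   "age": 29, "city": "Lisbon"},
    {"id": 1, "name": "Ben",   "age": 35, "city": "Berlin"},
    {"id": 2, "name": "Chloe", "age": 24, "city": "Paris"},
]

# Edge table: one row per relation, keyed by the two user IDs.
friendships = [
    {"source": 0, "target": 1, "since": "2019-04-02", "type": "colleague"},
    {"source": 1, "target": 2, "since": "2021-11-18", "type": "friend"},
]

# Adjacency matrix: A[i, j] = 1 iff users i and j are connected.
n = len(users)
A = np.zeros((n, n), dtype=int)
for e in friendships:
    A[e["source"], e["target"]] = 1
    A[e["target"], e["source"]] = 1  # friendship is symmetric

print(A)
# [[0 1 0]
#  [1 0 1]
#  [0 1 0]]
```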
When all of this information is provided to the neural network, it can extract patterns and insights that go beyond the simple information contained in the individual components of the graph.
Suffice it to say, the way we think of Graph Neural Networks is evolving, and their utility appears to be growing in our increasingly data-centric world.
What do you think of the potential of Graph Neural Networks?
If you want to support me so I can keep writing, please don’t hesitate to leave a tip, take out a paid subscription, or make a donation. With a conversion rate of less than two percent, this Newsletter exists mostly by the grace of my goodwill (a passion for A.I. and data science) and a relative state of poverty as I pivot into the Creator Economy myself.
Anyways, I hope you enjoyed the topic. That’s all for today.