TY - GEN
T1 - Large Language Models for Graph Learning
AU - Ding, Yujuan
AU - Fan, Wenqi
AU - Huang, Xiao
AU - Li, Qing
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/5/13
Y1 - 2024/5/13
N2 - Graphs are widely applied to encode entities and their various relations in web applications such as social media and recommender systems. Meanwhile, graph learning-based technologies, such as graph neural networks, are in high demand to support the analysis, understanding, and use of graph-structured data. Recently, the boom in language foundation models, especially Large Language Models (LLMs), has advanced several major research areas in artificial intelligence, such as natural language processing, graph mining, and recommender systems. The synergy between LLMs and graph learning holds great potential to advance research in both areas. For example, LLMs can enhance existing graph learning models by providing high-quality textual features for entities and edges, or by enriching graph data with encoded knowledge and information. They may also inspire novel problem formulations for graph-related tasks. Given this research significance and potential, the convergent area of LLMs and graph learning has attracted considerable research attention. We therefore propose to hold the workshop Large Language Models for Graph Learning at WWW’24 to provide a venue that gathers academic researchers and industry practitioners to present recent progress on relevant topics and exchange critical insights.
AB - Graphs are widely applied to encode entities and their various relations in web applications such as social media and recommender systems. Meanwhile, graph learning-based technologies, such as graph neural networks, are in high demand to support the analysis, understanding, and use of graph-structured data. Recently, the boom in language foundation models, especially Large Language Models (LLMs), has advanced several major research areas in artificial intelligence, such as natural language processing, graph mining, and recommender systems. The synergy between LLMs and graph learning holds great potential to advance research in both areas. For example, LLMs can enhance existing graph learning models by providing high-quality textual features for entities and edges, or by enriching graph data with encoded knowledge and information. They may also inspire novel problem formulations for graph-related tasks. Given this research significance and potential, the convergent area of LLMs and graph learning has attracted considerable research attention. We therefore propose to hold the workshop Large Language Models for Graph Learning at WWW’24 to provide a venue that gathers academic researchers and industry practitioners to present recent progress on relevant topics and exchange critical insights.
KW - Fine-tuning
KW - Graph Learning
KW - In-context Learning
KW - Large Language Models
KW - Pre-training
KW - Prompting
UR - http://www.scopus.com/inward/record.url?scp=85194479780&partnerID=8YFLogxK
U2 - 10.1145/3589335.3641300
DO - 10.1145/3589335.3641300
M3 - Conference article published in proceeding or book
AN - SCOPUS:85194479780
T3 - WWW 2024 Companion - Companion Proceedings of the ACM Web Conference
SP - 1643
EP - 1646
BT - WWW 2024 Companion - Companion Proceedings of the ACM Web Conference
PB - Association for Computing Machinery, Inc
T2 - 33rd ACM Web Conference, WWW 2024
Y2 - 13 May 2024 through 17 May 2024
ER -