CIKM 2021 | Contrastive Pre-Training of Graph Neural Networks on Heterogeneous Graphs

Graph neural networks (GNNs)

are representative methods for graph representation learning. They need labeled data to reach good performance, yet real-world systems usually contain large amounts of unlabeled data, which limits how well supervised models can do. An intuitive remedy is therefore to design pre-training strategies for GNNs that learn transferable knowledge from the general structural properties of graphs. However, most current pre-training strategies are designed for homogeneous graphs, in which every node and edge belongs to a single type, while real systems are usually heterogeneous: multiple types of nodes are connected by different types of edges and carry rich semantic information, which existing models find difficult to capture effectively.

In this article, we propose CPT-HG, a contrastive pre-training strategy for graph neural networks on heterogeneous graphs,

to capture semantic and structural properties in a self-supervised way. Specifically, we design pre-training tasks at the relation level and the subgraph (meta-graph) level, and further enhance the learned representations through contrastive learning.

At the relation level, the model captures the corresponding semantics by discriminating relations, the simplest heterogeneous structures; at the subgraph level, it constructs meta-graph instances to capture higher-order semantic information.

Paper title:

Contrastive Pre-Training of GNNs on Heterogeneous Graphs

Venue: CIKM 2021

Paper link:

In recent years, graphs have become a common abstraction for representing a wide variety of real-world data. As an emerging tool for machine learning on graph-structured data, graph neural networks (GNNs) learn powerful node representations by recursively aggregating the content (i.e., features or embeddings) of neighboring nodes, thereby preserving both content and structural information. They have been shown to improve performance on various graph applications, such as node and graph classification and recommendation. In general, GNN models are trained with (semi-)supervised information, and different downstream tasks require large amounts of labeled data. In most realistic scenarios, however, labeled data are costly to obtain. To make full use of unlabeled graph-structured data, recent work, inspired by advances in natural language processing and computer vision, has proposed pre-training GNN models on graphs. Although these GNN pre-training methods achieve good performance, they are designed for homogeneous graphs, in which all nodes and edges belong to a single type. Existing strategies thus ignore heterogeneous graphs, where multiple types of nodes interact through different types of edges.

Many real-life networks can be modeled as heterogeneous graphs, which consist of multiple types of nodes and edges and therefore carry rich semantics and distinctive structures. As shown in Figure 1(a), a simple heterogeneous graph can be constructed for bibliographic data, composed of author, paper, conference, and term nodes, connected by author-paper, paper-conference, and paper-term edges. Different types of nodes or edges usually exhibit different network properties, such as degree and clustering coefficient; for example, conference nodes usually have much higher degrees than author nodes. Heterogeneity also gives rise to more complex semantic contexts involving multiple relations among multiple nodes, for example the semantic context of "two authors working on similar topics". Beyond this simple example, heterogeneous graphs are common in many other fields, such as e-commerce, where users interact with items in various ways, and biology, where diseases, proteins, and drugs are associated with one another. Given their ubiquity, it is important to design GNN pre-training strategies specifically for heterogeneous graphs.

In this article, we propose a contrastive pre-training scheme that not only considers the differences between individual nodes but also preserves higher-order semantics among multiple nodes. More specifically, a relation-level pre-training task is designed to distinguish the relations between two nodes of different types (for example, author-paper and paper-conference relations), which lays a foundation for encoding downstream tasks. Inspired by contrastive learning [42], and in order to strengthen the learned representations, this paper constructs relation-level negative samples from two aspects (a sampling sketch follows the list below):

1. Negative samples from inconsistent relations, in which the two nodes are connected by a relation different from that of the positive sample;

2. Negative samples from unconnected nodes, in which the two nodes have no link at all in the graph.
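To make these two strategies concrete, below is a minimal Python sketch of how such relation-level positives and negatives could be drawn from a heterogeneous graph stored as typed triples. The data layout and all function names are illustrative assumptions, not the authors' implementation, and the K-hop constraint used in the paper is simplified here to "no direct link to the head node".

import random

# A minimal sketch (not the authors' implementation) of the two relation-level
# negative-sampling strategies described above. The heterogeneous graph is
# assumed to be given as a list of (head, relation, tail) triples.
def sample_relation_level(triples, num_neg=5, rng=random):
    by_head = {}
    linked = set()
    nodes = set()
    for h, r, t in triples:
        by_head.setdefault(h, []).append((r, t))
        linked.add((h, t))
        linked.add((t, h))
        nodes.update((h, t))

    u, r, v = rng.choice(triples)  # positive triple (u, r, v)

    # 1) Negatives from inconsistent relations: same head u, different relation r'.
    inconsistent = [(u, r2, t2) for r2, t2 in by_head[u] if r2 != r]
    neg_inconsistent = rng.sample(inconsistent, min(num_neg, len(inconsistent)))

    # 2) Negatives from unconnected nodes: candidate tails sharing no edge with u.
    unconnected = [n for n in nodes if n != u and (u, n) not in linked]
    neg_unconnected = [(u, r, n)
                       for n in rng.sample(unconnected, min(num_neg, len(unconnected)))]

    return (u, r, v), neg_inconsistent, neg_unconnected

In practice the sampled triples would be scored by the GNN encoder and fed into the contrastive losses described in Section 2.1 below.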

At the same time, this paper proposes a subgraph-level pre-training task on heterogeneous graphs, which samples meta-graph instances as subgraph instances for contrast, so that higher-order semantic information relevant to different downstream tasks can be encoded.

2. Method

In this section, the pre-training model of this paper is introduced through its two pre-training tasks, at the relation level and at the subgraph (meta-graph) level respectively.

2.1 Relation-level pre-training task

For a given relation on the heterogeneous graph, a pair of nodes connected through that relation constitutes a positive example, represented as a (head node, relation, tail node) triple for pre-training. Negative samples are constructed in two ways: from inconsistent relations and from unconnected nodes.

Inconsistent relations. For a given positive triple, the head node may also be connected to other nodes through relations that are inconsistent with the positive relation. Negative samples are therefore constructed as triples whose relation differs from that of the positive sample.

Since the set of such inconsistent-relation triples is relatively large, the method randomly samples from it to build negatives for training the graph neural network, and defines a contrastive loss over the positive triple and these negatives.

Here, each triple is scored with a learnable weight matrix that is specific to its relation.
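The paper's exact formulas are not reproduced in this article, so the following is only a hedged sketch of what such a relation-level contrastive objective could look like, assuming node embeddings $\mathbf{h}_u$, a learnable relation-specific matrix $\mathbf{W}_r$ as the scoring parameters, and a sampled set $\mathcal{N}_{inc}$ of inconsistent-relation triples:

$$ s(u, r, v) = \mathbf{h}_u^{\top} \mathbf{W}_r \mathbf{h}_v, \qquad \mathcal{L}_{inc} = -\log \frac{\exp\left(s(u, r, v)\right)}{\exp\left(s(u, r, v)\right) + \sum_{(u, r', v') \in \mathcal{N}_{inc}} \exp\left(s(u, r', v')\right)} $$

The softmax-style denominator pushes the score of the positive triple above the scores of the sampled inconsistent-relation triples.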

Unconnected nodes. Following previous work, this paper also adopts a simple negative-sampling scheme that directly samples nodes outside the K-hop neighborhood of the head node as negatives; restricting negatives to nodes beyond K hops helps ensure their quality. A corresponding contrastive loss is defined over these negatives.
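Again as a hedged sketch rather than the paper's exact formula, the unconnected-node loss can take the same contrastive form, with negatives drawn from a set $\mathcal{N}_{unc}$ of nodes sampled outside the K-hop neighborhood of the head node $u$:

$$ \mathcal{L}_{unc} = -\log \frac{\exp\left(s(u, r, v)\right)}{\exp\left(s(u, r, v)\right) + \sum_{v'' \in \mathcal{N}_{unc}} \exp\left(s(u, r, v'')\right)} $$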

Therefore, the overall loss of the relation-level pre-training task combines the two contrastive losses above.
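A natural way to combine the two, shown here only as an assumed sketch of such a combination, is to sum the two contrastive terms:

$$ \mathcal{L}_{rel} = \mathcal{L}_{inc} + \mathcal{L}_{unc} $$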

2.2 Subgraph-level pre-training task

To capture higher-order information, a natural idea is to use meta-paths to explore higher-order relations. However, pre-training GNNs on heterogeneous graphs with the widely used meta-paths has two weaknesses:

1. Compared with meta-graphs, meta-paths are weaker at characterizing rich semantics and extracting higher-order structures;

2. Starting from a source node, the number of nodes reachable along a meta-path can be very large, whereas the nodes covered by meta-graph instances of the same source node are fewer, because the meta-graph structure is more complex and more restrictive; this makes meta-graphs more efficient to sample from.

Therefore, this paper adopts meta-graphs to capture higher-order information.

Positive samples. For a given meta-graph and a source node, a subgraph instance is constructed as the union of the meta-graph instances that contain that node, i.e., the collection of all instances of the meta-graph around the source node; these serve as the positive subgraph samples.
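In symbols, and only as an illustrative sketch of the construction just described, the positive subgraph sample for a source node $u$ and a set of meta-graphs $\mathcal{M}$ could be written as the union of all meta-graph instances that contain $u$:

$$ g_u^{+} = \bigcup_{M \in \mathcal{M}} \{\, m \mid m \text{ is an instance of } M \text{ and } u \in m \,\} $$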

Negative sample queue. To construct negative samples of meta-graph instances, a dynamic queue is used to maintain the negative sample set instead of sampling negatives on the fly. Specifically, during training the most recent positive samples from previous steps are pushed into the queue and the oldest entries at the end of the queue are removed, and the queued samples are reused as negatives.
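Below is a minimal Python sketch of such a dynamic first-in-first-out queue (in the spirit of MoCo-style negative queues); the class name and queue size are assumptions made only for illustration.

from collections import deque

# Minimal sketch of a dynamic negative-sample queue: recent positive subgraph
# embeddings are enqueued as future negatives, and the oldest entries are
# dropped automatically once the queue reaches its maximum size.
class NegativeQueue:
    def __init__(self, max_size=1024):
        self.queue = deque(maxlen=max_size)

    def enqueue(self, positive_embeddings):
        # Add the most recent positive subgraph embeddings.
        self.queue.extend(positive_embeddings)

    def negatives(self):
        # Current negative set used by the contrastive loss.
        return list(self.queue)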

Therefore, to capture higher-order semantic information, this part contrasts the source node's representation against the corresponding positive and negative subgraph samples with a contrastive loss.
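The exact loss is not shown in this article; a hedged sketch of a typical subgraph-level contrastive (InfoNCE-style) objective, assuming a source-node embedding $\mathbf{z}_u$, a positive subgraph embedding $\mathbf{z}_u^{+}$, the queue $\mathcal{Q}$ of negatives, a similarity function $\mathrm{sim}(\cdot,\cdot)$, and a temperature $\tau$, would be:

$$ \mathcal{L}_{sub} = -\log \frac{\exp\left(\mathrm{sim}(\mathbf{z}_u, \mathbf{z}_u^{+}) / \tau\right)}{\exp\left(\mathrm{sim}(\mathbf{z}_u, \mathbf{z}_u^{+}) / \tau\right) + \sum_{\mathbf{z}^{-} \in \mathcal{Q}} \exp\left(\mathrm{sim}(\mathbf{z}_u, \mathbf{z}^{-}) / \tau\right)} $$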

To account for both pre-training tasks, this paper optimizes a combined loss over the relation-level and subgraph-level objectives.
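As an assumed sketch of such a combination (the balancing scheme is not specified in this article), the overall objective could simply add the two task losses with a trade-off weight $\lambda$:

$$ \mathcal{L} = \mathcal{L}_{rel} + \lambda\, \mathcal{L}_{sub} $$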

3. Experiments

3.1 Link Prediction

The table below shows the performance of all methods on the link prediction task, where the last row gives the improvement of the proposed method over the best existing method. The proposed model achieves a relative improvement of about 2% over the best existing baseline on all datasets, which verifies its effectiveness.

3.2 Node Classification

The table below shows the performance of all methods on the node classification task, where the last row gives the improvement of the proposed method over the best existing method. The proposed model achieves a relative improvement of about 1% over the best existing baseline on the DBLP and AMiner datasets, which verifies its effectiveness.

3.3 Ablation Study

By swapping in different base graph neural network models, it can be seen that the pre-trained models achieve good results relative to their non-pre-trained counterparts. The pre-trained GAT, however, does not perform satisfactorily: it is difficult to learn consistent attention weights between the pre-training graph and the fine-tuning graph, so its performance is worse than that of the model without pre-training.


Author: Jiang Xiangqiang

Illustration by Dmitry Nikunikov from Icons8

-The End-
