Google officially announces TensorFlow-GNN! Build graph neural networks at scale with dynamic and interactive sampling
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Editor: Taozi</span></p>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">[Xinzhiyuan Digest] Since their inception, graph neural networks have found wide application, representing the relationships between objects in the world. Today, the Google team officially announced TensorFlow-GNN 1.0, a production-tested library for building GNNs at scale.</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">In 2005, the landmark paper "The Graph Neural Network Model" brought graph neural networks to everyone's attention.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Before that, the usual way to handle graph data was to convert the graph into a set of "vector representations" during data preprocessing.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">The advent of GNNs fundamentally changed this lossy approach, and for nearly 20 years, generation after generation of models has evolved, driving progress in the ML field.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Today, Google officially announced the release of TensorFlow GNN 1.0 (TF-GNN), a production-tested library for building GNNs at scale.</span></p>
<div style="color: black; text-align: left; margin-bottom: 10px;"><img src="https://p3-sign.toutiaoimg.com/tos-cn-i-axegupay5k/f14010765bd940c8bcdf11d4b74de4c4~noop.image?_iz=58558&from=article.pc_detail&lk3s=953192f4&x-expires=1722931267&x-signature=XGVeGH3WZsgdbHQ90Aohc8lxcLw%3D" style="width: 50%; margin-bottom: 20px;"></div>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">It supports both modeling and training in TensorFlow, as well as extracting input graphs from large data stores.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">TF-GNN is built from the ground up for heterogeneous graphs, where types of objects and relations are represented by distinct sets of nodes and edges.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Real-world objects and their relations occur in distinct types, and TF-GNN's heterogeneous focus makes it very natural to represent them.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Google researcher Anton Tsitsulin says: complex heterogeneous modeling is back!</span></p>
<div style="color: black; text-align: left; margin-bottom: 10px;"><img src="https://p3-sign.toutiaoimg.com/tos-cn-i-6w9my0ksvp/c8ba6673d84f43e29f6fa2f9c1e114d8~noop.image?_iz=58558&from=article.pc_detail&lk3s=953192f4&x-expires=1722931267&x-signature=4sXU2a9MTddIMyUntgCdTKk%2B9oo%3D" style="width: 50%; margin-bottom: 20px;"></div>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><strong style="color: blue;">TF-GNN 1.0 makes its debut</strong></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Objects and the relations among them are ubiquitous in our world.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Relations can matter as much for understanding an object as the object's own properties viewed in isolation, whether in transportation networks, production networks, knowledge graphs, or social networks.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Discrete mathematics and computer science have long formalized such networks as graphs, made up of "nodes" connected by edges in various irregular ways.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Yet most machine learning algorithms allow only regular, uniform relations between input objects, such as a grid of pixels, a sequence of words, or no relations at all.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Graph neural networks, or GNNs for short, are a powerful technique that can exploit both a graph's connectivity (as early algorithms like DeepWalk and Node2Vec do) and the input features on its distinct nodes and edges.</span></p>
<div style="color: black; text-align: left; margin-bottom: 10px;"><img src="https://p3-sign.toutiaoimg.com/tos-cn-i-6w9my0ksvp/03d374e7af614af89edbaf62b7514a1f~noop.image?_iz=58558&from=article.pc_detail&lk3s=953192f4&x-expires=1722931267&x-signature=SuXnY3NcvXVUQM%2BUqhFBzG1Ibmw%3D" style="width: 50%; margin-bottom: 20px;"></div>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">GNNs can make predictions about the graph as a whole (does this molecule react in a certain way?), about individual nodes (what is the topic of this document, given its citations?), or about potential edges (is this product likely to be purchased together with that product?).</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Beyond making predictions on graphs, GNNs are also a powerful tool for bridging the gap to more typical neural network use cases.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">They encode a graph's discrete relational information in a continuous way, so that it can be naturally incorporated into another deep learning system.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Today, Google officially announced TensorFlow GNN 1.0 (TF-GNN), a production-tested library for building GNNs at scale.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">In TensorFlow, such graphs are represented by objects of type tfgnn.GraphTensor.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">This is a composite tensor type (a collection of tensors in one Python class) accepted as a first-class citizen in tf.data.Dataset, tf.function, and so on.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">It can store both the graph structure and the features attached to nodes, edges, and the graph as a whole.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Trainable transformations of GraphTensors can be defined as Layers objects in the high-level Keras API, or directly using tfgnn.GraphTensor primitives.</span></p>
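<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">To make the shape of that data concrete, here is a toy, dict-based stand-in (this is NOT the real tfgnn API, only a conceptual sketch) mirroring the pieces a GraphTensor holds: named node sets with features, named edge sets with adjacency, and graph-level context. All names below ("papers", "authors", etc.) are made up for illustration.</span></p>

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ToyGraph:
    # name -> {feature name: one value per node}
    node_sets: Dict[str, Dict[str, List]]
    # name -> list of edges as (source set, source index, target set, target index)
    edge_sets: Dict[str, List[Tuple[str, int, str, int]]]
    # features of the graph as a whole
    context: Dict[str, object] = field(default_factory=dict)

# A tiny heterogeneous graph: two node types, two relation types.
g = ToyGraph(
    node_sets={
        "papers": {"year": [2016, 2020]},
        "authors": {"h_index": [12, 3, 7]},
    },
    edge_sets={
        "writes": [("authors", 0, "papers", 0), ("authors", 1, "papers", 1)],
        "cites": [("papers", 1, "papers", 0)],
    },
    context={"dataset": "toy-citations"},
)
```

<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">The real GraphTensor stores each of these pieces as (possibly ragged) tensors so that whole batches of graphs flow through tf.data pipelines, but the decomposition into node sets, edge sets, and context is the same idea.</span></p>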
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><strong style="color: blue;">GNNs: predictions for objects in context</strong></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">To explain TF-GNN further, consider one of its typical applications:</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">predicting a property of certain nodes in a graph defined by cross-reference tables in a huge database</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">For example, in a citation database of computer science (CS) arxiv papers, where papers cite one another in one-to-many and many-to-one relationships, we can predict the subject area of each paper.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Like most neural networks, a GNN is trained on a dataset of many labeled samples (on the order of millions), but each training step contains only a much smaller batch of samples (say, hundreds).</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">To scale to millions of samples, the GNN is trained on a stream of reasonably small subgraphs from the underlying graph. Each subgraph contains enough of the raw data to compute the GNN result for the labeled node at its center and to train the model.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">This process, usually called subgraph sampling, is extremely important for GNN training.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Most existing tooling performs sampling in a batch fashion, producing static subgraphs for training.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">TF-GNN provides tools to improve on this with dynamic and interactive sampling.</span></p>
<div style="color: black; text-align: left; margin-bottom: 10px;"><img src="https://p3-sign.toutiaoimg.com/tos-cn-i-6w9my0ksvp/afac5461a21542d49d5470967aa097e5~noop.image?_iz=58558&from=article.pc_detail&lk3s=953192f4&x-expires=1722931267&x-signature=78avh0LAG7x2mAxYzRtgiU5gmvs%3D" style="width: 50%; margin-bottom: 20px;"></div>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">The subgraph sampling process: small, tractable subgraphs are sampled from a huge graph to create input examples for GNN training.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">TF-GNN 1.0 debuts a flexible Python API for configuring dynamic or batch subgraph sampling at all relevant scales: interactively in a Colab notebook.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Specifically, it supports efficient sampling of small datasets stored in the main memory of a single training host, or distributed sampling via Apache Beam of huge datasets (up to hundreds of millions of nodes and billions of edges) stored in a network filesystem.</span></p>
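<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">The core idea behind subgraph sampling can be sketched in a few lines of plain Python (TF-GNN's samplers are far more configurable than this): walk outward from a root node for a fixed number of hops, keeping at most a fixed fanout of neighbors per node, so the subgraph stays small but still contains everything needed to compute the root's GNN state. The graph, fanout, and hop count below are arbitrary illustrations.</span></p>

```python
from collections import deque

def sample_subgraph(neighbors, root, hops, fanout):
    """BFS outward from `root`, keeping at most `fanout` neighbors per node."""
    kept_nodes, kept_edges = {root}, []
    frontier = deque([(root, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nbr in neighbors.get(node, [])[:fanout]:
            kept_edges.append((nbr, node))  # edge oriented toward the root
            if nbr not in kept_nodes:
                kept_nodes.add(nbr)
                frontier.append((nbr, depth + 1))
    return kept_nodes, kept_edges

# Node 0 is the labeled root; node 3 is dropped by the fanout limit of 2.
neighbors = {0: [1, 2, 3], 1: [4], 2: [5, 6]}
nodes, edges = sample_subgraph(neighbors, root=0, hops=2, fanout=2)
```

<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Running many such samplers in parallel over a stream of root nodes is what produces the training subgraphs described above.</span></p>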
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">On these same sampled subgraphs, the GNN's task is to compute a hidden (or latent) state at the root node; the hidden state aggregates and encodes the relevant information of the root node's neighborhood.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">One common approach is the "message-passing neural network".</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">In each round of message passing, nodes receive messages from their neighbors along incoming edges and update their own hidden state from them.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">After n rounds, the hidden state of the root node reflects the aggregate information from all nodes within n edges (pictured below for n = 2). The messages and the new hidden states are computed by hidden layers of the neural network.</span></p>
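<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">A stripped-down sketch of n rounds of message passing, in plain Python: real GNN layers compute messages and state updates with learned transformations, but here a message is simply a neighbor's state and the update is a sum, which is enough to see how information propagates one hop per round. The graph and initial states are made up for illustration.</span></p>

```python
def message_passing(incoming, states, rounds):
    """incoming[v] lists the neighbors u that have an edge u -> v."""
    h = dict(states)
    for _ in range(rounds):
        # Every node adds up the current states of its in-neighbors.
        h = {v: h[v] + sum(h[u] for u in incoming.get(v, [])) for v in h}
    return h

incoming = {0: [1, 2], 1: [3]}            # edges: 1->0, 2->0, 3->1
states = {0: 0.0, 1: 1.0, 2: 2.0, 3: 4.0}

h1 = message_passing(incoming, states, rounds=1)  # root sees 1-hop neighbors
h2 = message_passing(incoming, states, rounds=2)  # root now also reflects node 3
```

<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">After one round the root (node 0) reflects only its direct neighbors; after two rounds, node 3's state has reached it via node 1, which is exactly the n-hop behavior described above.</span></p>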
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">In a heterogeneous graph, it usually makes sense to use separately trained hidden layers for the distinct types of nodes and edges.</span></p>
<div style="color: black; text-align: left; margin-bottom: 10px;"><img src="https://p3-sign.toutiaoimg.com/tos-cn-i-6w9my0ksvp/ab8946e0ce97493083b4fdc22cc7a188~noop.image?_iz=58558&from=article.pc_detail&lk3s=953192f4&x-expires=1722931267&x-signature=DP%2FgF29oUAGMAq%2FTo9mTrwn4c6U%3D" style="width: 50%; margin-bottom: 20px;"></div>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Pictured is a simple "message-passing neural network" in which, at each step, the node state is propagated from outer to inner nodes and pooled at the inner nodes to compute new node states. Once the root node is reached, a final prediction can be made.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">The training setup is completed by placing an output layer on top of the GNN's hidden state for the labeled nodes, computing the loss (to measure the prediction error), and updating model weights by backpropagation, as is usual in any neural network training.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Beyond supervised training, GNNs can also be trained in an unsupervised way, which lets us compute a continuous representation (or embedding) of the discrete graph structure of nodes and their features.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">These representations are then typically used in other ML systems.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">In this way, the discrete relational information encoded by the graph can be included in more typical neural network use cases. TF-GNN supports fine-grained specification of unsupervised objectives for heterogeneous graphs.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><strong style="color: blue;">Building GNN architectures</strong></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">The TF-GNN library supports building and training GNNs at distinct levels of abstraction.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">At the highest level, users can take any of the predefined models bundled with the library, which are expressed as Keras layers.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Besides a small collection of models from the research literature, TF-GNN comes with a highly configurable model template that provides a carefully curated selection of modeling choices.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Google has found these choices to provide strong baselines on many of its internal problems. The template implements GNN layers; users only need to initialize them as Keras layers.</span></p>
import tensorflow as tf
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import mt_albis

def model_fn(graph_tensor_spec: tfgnn.GraphTensorSpec):
  """Builds a GNN as a Keras model."""
  graph = inputs = tf.keras.Input(type_spec=graph_tensor_spec)
  # Encode input features (callback omitted for brevity).
  graph = tfgnn.keras.layers.MapFeatures(
      node_sets_fn=set_initial_node_states)(graph)
  # For each round of message passing...
  for _ in range(2):
    # ... create and apply a Keras layer.
    graph = mt_albis.MtAlbisGraphUpdate(
        units=128, message_dim=64,
        attention_type="none", simple_conv_reduce_type="mean",
        normalization_type="layer", next_state_type="residual",
        state_dropout_rate=0.2, l2_regularization=1e-5,
    )(graph)
  return tf.keras.Model(inputs, graph)
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">At the lowest level, users can write GNN models from scratch in terms of primitives for passing data around the graph, such as broadcasting data from a node to all of its outgoing edges, or pooling data into a node from all of its incoming edges.</span></p>
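<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">What those two data-movement primitives do can be sketched in plain Python (these are stand-ins for illustration, not the tfgnn ops themselves): broadcasting copies a node's value onto each of its outgoing edges, and pooling aggregates incoming edge values back into each target node.</span></p>

```python
def broadcast_from_source(node_values, edges):
    """Copy each edge's source-node value onto that edge."""
    return [node_values[src] for src, _tgt in edges]

def pool_to_target(edge_values, edges, num_nodes):
    """Sum the values on all incoming edges into each target node."""
    out = [0.0] * num_nodes
    for val, (_src, tgt) in zip(edge_values, edges):
        out[tgt] += val
    return out

edges = [(0, 2), (1, 2), (2, 0)]  # (source, target) pairs
messages = broadcast_from_source([1.0, 2.0, 3.0], edges)
pooled = pool_to_target(messages, edges, num_nodes=3)
```

<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">A broadcast followed by a pool is one round of message passing with the identity as the message function; the learned layers slot neural network transformations between these two steps.</span></p>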
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Where features or hidden states are concerned, TF-GNN's graph data model treats nodes, edges, and the whole input graph on an equal footing.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">It can therefore directly express not only node-centric models like MPNNs, but also more general forms of graph networks.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">This can (but does not have to) be done using Keras as a modeling framework on top of core TensorFlow.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><strong style="color: blue;">Training orchestration</strong></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">While advanced users are free to do custom model training, the TF-GNN Runner also provides a succinct way to orchestrate the training of Keras models in the common cases.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">A simple invocation may look like this:</span></p>
from tensorflow_gnn import runner

runner.run(
    task=runner.RootNodeBinaryClassification("papers", ...),
    model_fn=model_fn,
    trainer=runner.KerasTrainer(
        tf.distribute.MirroredStrategy(), model_dir="/tmp/model"),
    optimizer_fn=tf.keras.optimizers.Adam,
    epochs=10,
    global_batch_size=128,
    train_ds_provider=runner.TFRecordDatasetProvider("/tmp/train*"),
    valid_ds_provider=runner.TFRecordDatasetProvider("/tmp/validation*"),
    gtspec=...,
)
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">The Runner provides ready-to-use solutions for ML pains such as distributed training and tfgnn.GraphTensor padding for fixed shapes on Cloud TPUs.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Beyond training on a single task (as shown above), it supports joint training on multiple (two or more) tasks.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">For example, unsupervised tasks can be mixed with supervised ones to shape a final continuous representation (or embedding) with application-specific inductive biases. Callers only need to replace the task argument with a mapping of tasks:</span></p>
from tensorflow_gnn import runner
from tensorflow_gnn.models import contrastive_losses

runner.run(
    task={
        "classification": runner.RootNodeBinaryClassification("papers", ...),
        "dgi": contrastive_losses.DeepGraphInfomaxTask("papers"),
    },
    ...
)
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">In addition, the TF-GNN Runner also includes an implementation of integrated gradients for use in model attribution.</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">The integrated gradients output is a GraphTensor with the same connectivity as the observed GraphTensor, but with its features replaced by gradient values, where larger gradient values contribute more than smaller ones to the GNN's prediction.</span></p>
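<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">The integrated gradients technique itself is model-agnostic and can be sketched in plain Python: the attribution for feature i is (x_i - b_i) times the average gradient of the model along the straight line from a baseline b to the input x. The quadratic "model" below is a made-up stand-in for a GNN's prediction function, chosen so the gradient is known analytically.</span></p>

```python
def integrated_gradients(grad_fn, x, baseline, steps=64):
    """IG_i ~= (x_i - b_i) * average of grad_i at midpoints along b -> x."""
    n = len(x)
    avg_grad = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint rule
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_fn(point)
        for i in range(n):
            avg_grad[i] += g[i] / steps
    return [(xi - b) * ag for xi, b, ag in zip(x, baseline, avg_grad)]

# Hypothetical prediction function f(x) = x0^2 + 2*x1 with analytic gradient.
f = lambda x: x[0] ** 2 + 2 * x[1]
grad = lambda x: [2 * x[0], 2.0]

ig = integrated_gradients(grad, x=[3.0, 1.0], baseline=[0.0, 0.0])
```

<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">A useful sanity check is the completeness property: the attributions sum to f(x) - f(baseline). The Runner's implementation does the same thing, except that x and the baseline are entire GraphTensors of features and the gradients come from the trained GNN.</span></p>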
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">In all, Google hopes TF-GNN will help advance the large-scale application of GNNs in TensorFlow and fuel further innovation in the field.</span></p>