The dataset is Cora, a citation network of machine learning papers grouped into the following 7 classes: Case_Based, Genetic_Algorithms, Neural_Networks, Probabilistic_Methods, Reinforcement_Learning, Rule_Learning, and Theory.
It consists of the files cora.content and cora.cites, with 2708 samples in total; each sample has a 1433-dimensional feature vector.
Download: https://linqs.soe.ucsc.edu/data
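Before any model code, a quick sanity check of the raw files can confirm these numbers (a minimal sketch; the path below is the Google Drive path assumed by the loading code further down and may need adjusting):

path = "/content/drive/My Drive/nlpdata/cora/"   # assumed path, adjust to your setup

with open(path + "cora.content") as f:
    rows = [line.split() for line in f]
print(len(rows))          # expected: 2708 samples
print(len(rows[0]) - 2)   # expected: 1433 features (first column is the id, last is the label)

with open(path + "cora.cites") as f:
    num_edges = sum(1 for _ in f)
print(num_edges)          # number of citation records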
cora.content:
Each line consists of a paper id, followed by the feature vector, followed by the class label.
31336 0 0 0 0 ... 0 0 Neural_Networks
1061127 0 0 0 0 ... 0 0 Rule_Learning
1106406 0 0 0 0 ... 0 0 Reinforcement_Learning
13195 0 0 0 0 ... 0 0 Reinforcement_Learning
......
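The sketch below parses a single cora.content line into its three parts; the bulk loader later does the same with np.genfromtxt (path assumed as above):

path = "/content/drive/My Drive/nlpdata/cora/"   # assumed path
with open(path + "cora.content") as f:
    line = f.readline().split()
paper_id = line[0]                          # e.g. '31336'
features = [int(x) for x in line[1:-1]]     # 1433 zeros/ones (word-presence indicators)
label = line[-1]                            # e.g. 'Neural_Networks'
print(paper_id, len(features), label)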
cora.cites:
Citation relations: each line contains the id of the cited paper followed by the id of the citing paper.
35 1033
35 103482
35 103515
35 1050679
35 1103960
35 1103985
35 1109199
35 1112911
......
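Because the raw paper ids are not contiguous, they have to be remapped to 0..2707 before an adjacency matrix can be built. A minimal sketch of that step (path assumed as above; the full loader below does the same thing):

import numpy as np

path = "/content/drive/My Drive/nlpdata/cora/"   # assumed path
content = np.genfromtxt(path + "cora.content", dtype=str)
idx_map = {pid: i for i, pid in enumerate(content[:, 0])}   # raw id -> row index

cites = np.genfromtxt(path + "cora.cites", dtype=str)
edges = np.array([(idx_map[a], idx_map[b]) for a, b in cites])
print(edges.shape)   # (num_edges, 2), each row = (cited index, citing index)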
Reading the dataset:
import numpy as np
import scipy.sparse as sp
import torch
from sklearn.preprocessing import LabelBinarizer


def normalize_adj(adjacency):
    """Symmetrically normalize the adjacency matrix: D^-0.5 * (A + I) * D^-0.5."""
    adjacency += sp.eye(adjacency.shape[0])          # add self-loops
    degree = np.array(adjacency.sum(1))
    d_hat = sp.diags(np.power(degree, -0.5).flatten())
    return d_hat.dot(adjacency).dot(d_hat).tocoo()


def normalize_features(features):
    """Row-normalize the feature matrix."""
    return features / features.sum(1)


def load_data(path="/content/drive/My Drive/nlpdata/cora/", dataset="cora"):
    """Load citation network dataset (cora only for now)."""
    print('Loading {} dataset...'.format(dataset))
    idx_features_labels = np.genfromtxt("{}{}.content".format(path, dataset),
                                        dtype=np.dtype(str))
    features = sp.csr_matrix(idx_features_labels[:, 1:-1], dtype=np.float32)
    encode_onehot = LabelBinarizer()
    labels = encode_onehot.fit_transform(idx_features_labels[:, -1])

    # build graph
    idx = np.array(idx_features_labels[:, 0], dtype=np.int32)
    idx_map = {j: i for i, j in enumerate(idx)}
    edges_unordered = np.genfromtxt("{}{}.cites".format(path, dataset),
                                    dtype=np.int32)
    edges = np.array(list(map(idx_map.get, edges_unordered.flatten())),
                     dtype=np.int32).reshape(edges_unordered.shape)
    adj = sp.coo_matrix((np.ones(edges.shape[0]), (edges[:, 0], edges[:, 1])),
                        shape=(labels.shape[0], labels.shape[0]),
                        dtype=np.float32)

    features = normalize_features(features)
    adj = normalize_adj(adj)

    # fixed train / validation / test splits
    idx_train = range(140)
    idx_val = range(200, 500)
    idx_test = range(500, 1500)

    features = torch.FloatTensor(np.array(features))
    labels = torch.LongTensor(np.where(labels)[1])

    num_nodes = features.shape[0]
    train_mask = np.zeros(num_nodes, dtype=bool)
    val_mask = np.zeros(num_nodes, dtype=bool)
    test_mask = np.zeros(num_nodes, dtype=bool)
    train_mask[idx_train] = True
    val_mask[idx_val] = True
    test_mask[idx_test] = True

    return adj, features, labels, train_mask, val_mask, test_mask
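A short usage check of load_data (a sketch; the printed shapes follow from the code above):

adj, features, labels, train_mask, val_mask, test_mask = load_data()
print(features.shape)   # torch.Size([2708, 1433])
print(labels.shape)     # torch.Size([2708])
print(adj.shape)        # (2708, 2708), scipy coo_matrix with normalized weights
print(train_mask.sum(), val_mask.sum(), test_mask.sum())   # 140 300 1000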
Building the network:
import numpy as np
import scipy.sparse as sp
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
import torch.optim as optim
import matplotlib.pyplot as plt
import sys
sys.path.append("/content/drive/My Drive/nlpdata/cora/")
from load_cora import *


class GraphConvolution(nn.Module):
    def __init__(self, input_dim, output_dim, use_bias=True):
        """Graph convolution: L * X * \theta

        Args:
        ----------
        input_dim: int
            dimension of the input node features
        output_dim: int
            dimension of the output features
        use_bias: bool, optional
            whether to add a bias term
        """
        super(GraphConvolution, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.use_bias = use_bias
        self.weight = nn.Parameter(torch.Tensor(input_dim, output_dim))
        if self.use_bias:
            self.bias = nn.Parameter(torch.Tensor(output_dim))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        init.kaiming_uniform_(self.weight)
        if self.use_bias:
            init.zeros_(self.bias)

    def forward(self, adjacency, input_feature):
        """The adjacency matrix is sparse, so sparse matrix multiplication is used here.

        Args:
        -------
        adjacency: torch.sparse.FloatTensor
            adjacency matrix
        input_feature: torch.Tensor
            input features
        """
        device = "cuda" if torch.cuda.is_available() else "cpu"
        support = torch.mm(input_feature, self.weight.to(device))
        output = torch.sparse.mm(adjacency, support)
        if self.use_bias:
            output += self.bias.to(device)
        return output

    def __repr__(self):
        return self.__class__.__name__ + ' (' \
            + str(self.input_dim) + ' -> ' + str(self.output_dim) + ')'


# ## Model definition
class GcnNet(nn.Module):
    """A model with two GraphConvolution layers."""
    def __init__(self, input_dim=1433):
        super(GcnNet, self).__init__()
        self.gcn1 = GraphConvolution(input_dim, 16)
        self.gcn2 = GraphConvolution(16, 7)

    def forward(self, adjacency, feature):
        h = F.relu(self.gcn1(adjacency, feature))
        logits = self.gcn2(adjacency, h)
        return logits
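As a quick check (a sketch, not part of the original script), the model can be instantiated and its trainable parameters counted:

model = GcnNet(input_dim=1433)
print(model)
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("trainable parameters:", num_params)   # 1433*16 + 16 + 16*7 + 7 = 23063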
Training and testing:
# ## Model training
# Hyperparameters
learning_rate = 0.1
weight_decay = 5e-4
epochs = 200

# Model, loss, optimizer
device = "cuda" if torch.cuda.is_available() else "cpu"
model = GcnNet().to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = optim.Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay)

adjacency, features, labels, train_mask, val_mask, test_mask = load_data()
tensor_x = features.to(device)
tensor_y = labels.to(device)
tensor_train_mask = torch.from_numpy(train_mask).to(device)
tensor_val_mask = torch.from_numpy(val_mask).to(device)
tensor_test_mask = torch.from_numpy(test_mask).to(device)
indices = torch.from_numpy(np.asarray([adjacency.row, adjacency.col]).astype('int64')).long()
values = torch.from_numpy(adjacency.data.astype(np.float32))
tensor_adjacency = torch.sparse.FloatTensor(indices, values, (2708, 2708)).to(device)


# Main training loop
def train():
    loss_history = []
    val_acc_history = []
    train_y = tensor_y[tensor_train_mask]
    for epoch in range(epochs):
        model.train()
        logits = model(tensor_adjacency, tensor_x)      # forward pass
        train_mask_logits = logits[tensor_train_mask]   # supervise only the training nodes
        loss = criterion(train_mask_logits, train_y)    # compute the loss
        optimizer.zero_grad()
        loss.backward()                                  # back-propagation to compute gradients
        optimizer.step()                                 # update parameters with the optimizer
        train_acc, _, _ = test(tensor_train_mask)        # accuracy on the training set
        val_acc, _, _ = test(tensor_val_mask)            # accuracy on the validation set
        # record loss and accuracy for plotting later
        loss_history.append(loss.item())
        val_acc_history.append(val_acc.item())
        print("Epoch {:03d}: Loss {:.4f}, TrainAcc {:.4}, ValAcc {:.4f}".format(
            epoch, loss.item(), train_acc.item(), val_acc.item()))
    return loss_history, val_acc_history


# Evaluation
def test(mask):
    model.eval()
    with torch.no_grad():
        logits = model(tensor_adjacency, tensor_x)
        test_mask_logits = logits[mask]
        predict_y = test_mask_logits.max(1)[1]
        accuracy = torch.eq(predict_y, tensor_y[mask]).float().mean()
    return accuracy, test_mask_logits.cpu().numpy(), tensor_y[mask].cpu().numpy()


if __name__ == "__main__":
    train()
    test_accuracy, _, _ = test(tensor_test_mask)
    print("Test accuracy: {:.4f}".format(test_accuracy))
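matplotlib is imported above but never used; a small plotting sketch (an assumption about how one might visualize the histories returned by train()) would look like this:

# Optional: plot training loss and validation accuracy over epochs
loss_history, val_acc_history = train()
fig, ax1 = plt.subplots()
ax1.plot(loss_history, 'r', label='training loss')
ax1.set_xlabel('epoch')
ax1.set_ylabel('loss')
ax2 = ax1.twinx()
ax2.plot(val_acc_history, 'b', label='validation accuracy')
ax2.set_ylabel('accuracy')
plt.title('GCN on Cora')
plt.show()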
Results:
Loading cora dataset...
Epoch 000: Loss 1.9681, TrainAcc 0.3286, ValAcc 0.3467
Epoch 001: Loss 1.7307, TrainAcc 0.4786, ValAcc 0.4033
Epoch 002: Loss 1.5521, TrainAcc 0.5214, ValAcc 0.4033
Epoch 003: Loss 1.3685, TrainAcc 0.6143, ValAcc 0.5100
Epoch 004: Loss 1.1594, TrainAcc 0.75, ValAcc 0.5767
Epoch 005: Loss 0.9785, TrainAcc 0.7857, ValAcc 0.5900
Epoch 006: Loss 0.8226, TrainAcc 0.8286, ValAcc 0.5867
Epoch 007: Loss 0.6849, TrainAcc 0.8929, ValAcc 0.6200
Epoch 008: Loss 0.5448, TrainAcc 0.9429, ValAcc 0.6433
Epoch 009: Loss 0.4152, TrainAcc 0.9429, ValAcc 0.6667
Epoch 010: Loss 0.3221, TrainAcc 0.9857, ValAcc 0.6767
Epoch 011: Loss 0.2547, TrainAcc 1.0, ValAcc 0.7033
Epoch 012: Loss 0.1979, TrainAcc 1.0, ValAcc 0.7167
Epoch 013: Loss 0.1536, TrainAcc 1.0, ValAcc 0.7000
Epoch 014: Loss 0.1276, TrainAcc 1.0, ValAcc 0.6700
Epoch 015: Loss 0.1122, TrainAcc 1.0, ValAcc 0.6867
Epoch 016: Loss 0.0979, TrainAcc 1.0, ValAcc 0.6800
Epoch 017: Loss 0.0876, TrainAcc 1.0, ValAcc 0.6700
Epoch 018: Loss 0.0821, TrainAcc 1.0, ValAcc 0.6667
Epoch 019: Loss 0.0799, TrainAcc 1.0, ValAcc 0.6800
Epoch 020: Loss 0.0804, TrainAcc 1.0, ValAcc 0.6933
...... (epochs 021-194 omitted)
Epoch 195: Loss 0.0554, TrainAcc 1.0, ValAcc 0.6933
Epoch 196: Loss 0.0553, TrainAcc 1.0, ValAcc 0.7000
Epoch 197: Loss 0.0553, TrainAcc 1.0, ValAcc 0.7033
Epoch 198: Loss 0.0553, TrainAcc 1.0, ValAcc 0.6933
Epoch 199: Loss 0.0553, TrainAcc 1.0, ValAcc 0.7033
Test accuracy: 0.6480
References:
https://blog.csdn.net/weixin_39373480/article/details/88742200
【深入淺出圖神經網絡】