This article covers "how to implement a C language extension". Quite a few people run into trouble with this in real-world cases, so the walkthrough below shows how to handle those situations. Read it carefully and you should come away with something you can use!
Extending torch.autograd

Adding operations to autograd requires implementing a new Function subclass for each operation. Recall that Functions are what autograd uses to compute the results and gradients, and to encode the operation history. Every new function requires you to implement two methods:

forward() - the code that performs the operation. It can take as many arguments as you want, with some of them being optional if you specify default values. All kinds of Python objects are accepted here. Tensor arguments that track history (i.e., with requires_grad=True) will be converted to ones that don't track history before the call, and their use will be registered in the graph. Note that this logic won't traverse lists/dicts/any other data structures and will only consider Tensors that are direct arguments to the call. You can return either a single Tensor output, or a tuple of Tensors if there are multiple outputs. Also, please refer to the docs of Function to find descriptions of useful methods that can be called only from forward().

backward() - the gradient formula. It will be given as many Tensor arguments as there were outputs, with each of them representing the gradient w.r.t. that output. It should return as many Tensors as there were inputs, with each of them containing the gradient w.r.t. its corresponding input. If an input didn't require a gradient (ctx.needs_input_grad is a tuple of booleans indicating whether each input needs gradient computation), or was a non-Tensor object, you can return None for it. Also, if you have optional arguments to forward(), you can return more gradients than there were inputs, as long as they're all None.

Below you can find the code for a Linear function from torch.nn, with additional comments:
# Import added so the snippet is self-contained.
from torch.autograd import Function

# Inherit from Function
class LinearFunction(Function):

    # Note that both forward and backward are @staticmethods
    @staticmethod
    # bias is an optional argument
    def forward(ctx, input, weight, bias=None):
        ctx.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    # This function has only a single output, so it gets only one gradient
    @staticmethod
    def backward(ctx, grad_output):
        # This is a pattern that is very convenient - at the top of backward
        # unpack saved_tensors and initialize all gradients w.r.t. inputs to
        # None. Thanks to the fact that additional trailing Nones are
        # ignored, the return statement is simple even when the function has
        # optional inputs.
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_weight = grad_bias = None

        # These needs_input_grad checks are optional and there only to
        # improve efficiency. If you want to make your code simpler, you can
        # skip them. Returning gradients for inputs that don't require it is
        # not an error.
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        if ctx.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)
        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = grad_output.sum(0).squeeze(0)

        return grad_input, grad_weight, grad_bias
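To call the custom op you use its apply method (often aliased for readability), and you can check the backward() implementation with torch.autograd.gradcheck, which compares the analytical gradients against numerical ones. The sketch below assumes the LinearFunction class defined above; the variable name linear and the tensor shapes are just illustrative choices.

import torch
from torch.autograd import gradcheck

# Alias apply so the op can be called like a regular function.
linear = LinearFunction.apply

# Double-precision inputs give gradcheck enough numerical accuracy.
input = torch.randn(20, 20, dtype=torch.double, requires_grad=True)
weight = torch.randn(30, 20, dtype=torch.double, requires_grad=True)

# gradcheck returns True if analytical and numerical gradients match.
print(gradcheck(linear, (input, weight), eps=1e-6, atol=1e-4))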
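To illustrate the point above about returning None for inputs that are not Tensors, here is a minimal sketch of a second Function; the class name MulConstant is hypothetical and not part of the Linear example. The scalar constant receives no gradient, so its slot in backward()'s return value is simply None.

from torch.autograd import Function

class MulConstant(Function):
    @staticmethod
    def forward(ctx, tensor, constant):
        # Non-Tensor arguments can be stashed directly on ctx.
        ctx.constant = constant
        return tensor * constant

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. the input tensor, and None for the non-Tensor constant.
        return grad_output * ctx.constant, None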
That wraps up "how to implement a C language extension". Thanks for reading, and I hope it proves useful in practice!