sum with CUDA and view on array errors #1498

Hi,
I wanted to calculate a loss allocation-free by using a view on the array and applying the loss to it.
Funnily enough, even the forward pass allocates, but the gradient under CUDA errors:
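The code snippet and error log were not preserved in this capture. As a minimal sketch of the kind of code under discussion, assuming `isobject` is an index array on the GPU (the original may have used a boolean mask) and that the loss composes `relu` and `abs2`, with both names taken from the replies below:

```julia
using CUDA, NNlib, Zygote

# Hypothetical reconstruction -- the original snippet is not preserved here.
# `isobject`, relu, and abs2 are taken from the replies below; the data and
# indices are made up.
f(x, isobject) = sum(abs2 ∘ NNlib.relu, 1 .- view(x, isobject))

x = CUDA.rand(Float32, 128)
isobject = CuArray(1:2:127)

f(x, isobject)                            # forward pass runs, but allocates
Zygote.gradient(x -> f(x, isobject), x)   # reported to error under CUDA
```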
Environment:

Comments

That's because there are rules for this in Zygote.jl/src/lib/broadcast.jl, lines 374 to 384 (at c0daccd). Somebody would have to figure out how to make those work for SubArrays. Alternatively, you could try splitting the operations up to hit more advantageous rules:

f(x, isobject) = sum(abs2, NNlib.relu.(1 .- view(x, isobject)))
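Spelled out as a self-contained sketch (the test data here is made up): the point of the split is that `sum(abs2, y)` has a dedicated gradient rule in ChainRules, so only the `relu.(1 .- ...)` broadcast goes through the generic broadcast machinery.

```julia
using CUDA, NNlib, Zygote

# The suggested split: materialise relu.(1 .- view(...)) first, so that
# sum(abs2, y) can use its dedicated rule. Test data is hypothetical.
f(x, isobject) = sum(abs2, NNlib.relu.(1 .- view(x, isobject)))

x = CUDA.rand(Float32, 128)
isobject = CuArray(1:2:127)
Zygote.gradient(x -> f(x, isobject), x)
```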
Yeah, your function works, but it allocates another full array, which I wanted to avoid. But it actually looks like the
More than one full array, in fact. If the goal is to reduce allocations from unfused operations, you could keep the original code but make the
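That suggestion is cut off in this capture. For the forward pass alone, one allocation-conscious variant (an assumption, not necessarily what was being proposed) is to fuse everything into a single mapreduce:

```julia
using CUDA, NNlib

# Sketch under assumptions -- not necessarily the commenter's truncated
# suggestion. Fusing the subtraction, relu, and abs2 into the reduction
# avoids materialising any intermediate array. Zygote has no dedicated
# rule for a general mapreduce, though, so this helps the forward pass only.
g(x, isobject) = mapreduce(xi -> abs2(NNlib.relu(1 - xi)), +, view(x, isobject))

x = CUDA.rand(Float32, 128)
isobject = CuArray(1:2:127)
g(x, isobject)
```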