Now that I have a working prototype for GLSL transpilation, it'd be nice to have the same Julia code compile to GLSL and CUDAnative without hassle!
Shared Memory
In GLSL, it seems keywords like shared are just one keyword out of a set of qualifiers. So I had the idea of creating an intrinsic type Qualified{Qualifier, Type}.
You could then create shared memory like this:
Qualified{:shared, StaticVector{10, Float32}}()
I'm not sure how well this can work with CUDAnative's code generation...
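To make the idea a bit more concrete, here is a minimal Julia sketch of what such a wrapper could look like; Qualified and its constructor are hypothetical names for this issue, not an existing API, and MVector from StaticArrays stands in for the mutable static storage:

```julia
using StaticArrays

# Hypothetical intrinsic wrapper: the transpiler would recognize the Qualifier
# parameter and emit the backend's qualifier (e.g. `shared` in GLSL).
struct Qualified{Qualifier, T}
    data::T
end

# Convenience constructor so the storage type can be inferred from the data.
Qualified{Qualifier}(data::T) where {Qualifier, T} = Qualified{Qualifier, T}(data)

# Inside a kernel, workgroup-shared scratch space could then be declared as:
shared_tmp = Qualified{:shared}(MVector{10, Float32}(undef))
```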
Intrinsics
There are a lot of shared intrinsics, like memory barriers, work-group index getters, etc.
The problem with them is that we'd need to dispatch on some backend type to select the correct intrinsic name for each backend.
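As a sketch of what that dispatch could look like (the backend types and the mapping function below are purely hypothetical, not part of any existing package):

```julia
# Hypothetical backend singletons to dispatch on.
abstract type AbstractBackend end
struct CUDABackend <: AbstractBackend end
struct GLSLBackend <: AbstractBackend end

# Map a generic intrinsic to the name the transpiler should emit for the backend.
intrinsic_name(::CUDABackend, ::Val{:synchronize}) = "sync_threads"
intrinsic_name(::GLSLBackend, ::Val{:synchronize}) = "barrier"

intrinsic_name(::CUDABackend, ::Val{:local_index}) = "threadIdx"
intrinsic_name(::GLSLBackend, ::Val{:local_index}) = "gl_LocalInvocationID"
```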
I could in theory just mirror the CUDA names, since I go through the Julia code anyway and can replace them with the correct names for GLSL.
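Sketched out, that name-mirroring approach could be as simple as a lookup table the GLSL transpiler consults while rewriting calls (the entries below are illustrative, not a complete mapping):

```julia
# Kernels use the CUDAnative names; the GLSL transpiler rewrites them.
const CUDA_TO_GLSL = Dict(
    :sync_threads => :barrier,
    :threadIdx    => :gl_LocalInvocationID,
    :blockIdx     => :gl_WorkGroupID,
    :blockDim     => :gl_WorkGroupSize,
)

# Emit the GLSL name if we have a mapping, otherwise keep the original call.
rename_intrinsic(f::Symbol) = get(CUDA_TO_GLSL, f, f)
```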
Any thoughts on this?