Add share Info #668
Conversation
Yes, we should be doing that. And with the Parser, we could let users provide an Info.annotations(...) on the constructor, I guess.
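For illustration, a minimal sketch of what that could look like in a preset's `InfoMap` (the target `ns::M::M` and the `@SharedPtr` annotation are assumptions here, not something this PR defines):

```java
import org.bytedeco.javacpp.tools.Info;
import org.bytedeco.javacpp.tools.InfoMap;
import org.bytedeco.javacpp.tools.InfoMapper;

public class ExamplePresets implements InfoMapper {
    public void map(InfoMap infoMap) {
        // Hypothetical: key an Info on the constructor's qualified name so that
        // the Parser emits @SharedPtr on the corresponding allocate() methods.
        infoMap.put(new Info("ns::M::M").annotations("@SharedPtr"));
    }
}
```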
I can try something in this direction. Also, I'm wondering about the relevance of generating the …
I'm sure it could be done somehow, but those are just hacks because the C++ compiler isn't able to resolve ambiguities. We'd need adapters with more explicit forms of type conversions as per issue #374 (comment). /cc @equeim
For handling:

```cpp
template<class T> class SharedPtrAdapter {
public:
    typedef SHARED_PTR_NAMESPACE::shared_ptr<T> S;
    typedef SHARED_PTR_NAMESPACE::shared_ptr<const T> SC;
    SharedPtrAdapter(const T* ptr, size_t size, void* owner) : ptr((T*)ptr), size(size), owner(owner),
            sharedPtr2(owner != NULL && owner != ptr ? *(S*)owner : S((T*)ptr)), sharedPtr(sharedPtr2) { }
    SharedPtrAdapter(const S& sharedPtr) : ptr(0), size(0), owner(0), sharedPtr2(sharedPtr), sharedPtr(sharedPtr2) { }
    SharedPtrAdapter(      S& sharedPtr) : ptr(0), size(0), owner(0), sharedPtr(sharedPtr) { }
    SharedPtrAdapter(const SC& sharedPtr) : ptr(0), size(0), owner(0), sharedPtr2(std::const_pointer_cast<T>(sharedPtr)), sharedPtr(sharedPtr2) { }
    SharedPtrAdapter(const S* sharedPtr) : ptr(0), size(0), owner(0), sharedPtr(*(S*)sharedPtr) { }
    // Members assumed from the initializer lists above (not shown in the
    // excerpt); declaration order matters, since sharedPtr is a reference
    // that usually binds to sharedPtr2.
    T* ptr;
    size_t size;
    void* owner;
    S sharedPtr2;
    S& sharedPtr;
};
```

For functions taking either … Am I wrong?
Sounds like something that could work...
Could you review that? We must also find a way to have explicit cast …
out.println(" SharedPtrAdapter( S& sharedPtr) : ptr(0), size(0), owner(0), sharedPtr(sharedPtr) { }"); | ||
out.println(" SharedPtrAdapter(const S* sharedPtr) : ptr(0), size(0), owner(0), sharedPtr(*(S*)sharedPtr) { }"); | ||
out.println(" void assign(T* ptr, size_t size, void* owner) {"); | ||
out.println(" typedef typename SHARED_PTR_NAMESPACE::remove_const<T>::type TU;"); |
Either these `remove_const` / `add_const` are needed, or we must ensure in `adapterInformation` that `valueTypeName` has no const (for all adapters??).
Well, did you test it with all presets that use …?
```diff
@@ -1006,7 +1023,8 @@ Type type(Context context, boolean definition) throws ParserException {
         }
         if (info != null && info.annotations != null) {
             for (String s : info.annotations) {
-                type.annotations += s + " ";
+                if (!type.annotations.contains(s))
+                    type.annotations += s + " ";
```
Are there annotations which could legitimately be duplicated?
Not yet. I did for PyTorch. I'd like to know if you see something wrong first.
Right, so what about moving the development of that to the PyTorch presets? We can add adapters there like this: …
My roadmap for PyTorch is to propose another PR (yes, again, but it should be the last one :) ) about explicit upcasts (dealing with virtual inheritance), once this one is merged. Then I'll have PyTorch presets ready with all improvements. It already works well with this PR and the next one; I'm currently training models with them. For this PR, if you're afraid of the changes in the …
We don't need to make any modifications to the core of JavaCPP to do most of what you're proposing for that too. There's a lot we can move to the presets for PyTorch. Why do you want to modify everything here?
Don't you think the ability to create shared objects from Java is a good thing that can benefit other presets? Or are you talking about something else?
Like I said, we can already provide functions like that without modifying anything from the core of JavaCPP. It would be nice if the Generator could pick up annotations on allocate() and call a predetermined static function from the adapter, but you're not doing this here, and the rest of what you're doing can be done without modifying the core of JavaCPP.
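As a rough illustration of that point, here is a sketch of a preset-side factory that needs no core changes, assuming the presets also compile a small native helper; every name here (`ns::C`, `make_shared_C`, `SharedFactory`) is hypothetical:

```java
import org.bytedeco.javacpp.Pointer;
import org.bytedeco.javacpp.annotation.Namespace;
import org.bytedeco.javacpp.annotation.SharedPtr;

public class SharedFactory {
    // Minimal stand-in for the parsed peer class of a native type ns::C.
    @Namespace("ns") public static class C extends Pointer {
        public C(Pointer p) { super(p); }
    }

    // Hypothetical mapping of a helper the presets would define in C++, e.g.
    //     std::shared_ptr<ns::C> make_shared_C() { return std::make_shared<ns::C>(); }
    // @SharedPtr routes the returned shared_ptr through SharedPtrAdapter, so
    // the Java object shares ownership with native code from the start.
    @Namespace("ns") public static native @SharedPtr C make_shared_C();
}
```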
Did you have a look at the changes? It does exactly that: when …
Here is an example of what an allocate with @SharedPtr looks like:

```cpp
JNIEXPORT void JNICALL Java_shared_M_allocate__(JNIEnv* env, jobject obj) {
    jthrowable exc = NULL;
    try {
        SharedPtrAdapter< ns::M > radapter(new ns::M(), 1, NULL);
        ns::M* rptr = radapter;
        jlong rcapacity = (jlong)radapter.size;
        void* rowner = radapter.owner;
        void (*deallocator)(void*) = rowner != NULL ? &SharedPtrAdapter< ns::M >::deallocate : 0;
        JavaCPP_initPointer(env, obj, rptr, rcapacity, rowner, deallocator);
    } catch (...) {
        exc = JavaCPP_handleException(env, 8);
    }
    if (exc != NULL) {
        env->Throw(exc);
    }
}
```
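For context, a sketch of the Java peer class that could produce JNI like the above, assuming a package `shared` and the proposed `@SharedPtr` annotation on `allocate()` (the class and package names are taken from the JNI symbol, the rest is an assumption):

```java
package shared;

import org.bytedeco.javacpp.Pointer;
import org.bytedeco.javacpp.annotation.Namespace;
import org.bytedeco.javacpp.annotation.SharedPtr;

// Sketch only: with the proposed @SharedPtr on allocate(), the Generator would
// emit the SharedPtrAdapter-based allocation shown above instead of a plain new.
@Namespace("ns") public class M extends Pointer {
    public M() { allocate(); }
    @SharedPtr private native void allocate();
}
```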
I see, we don't need to change SharedPtrAdapter to wrap on allocate(). So let's start by merging in that modification only.
How do you add an annotation on `allocate`? The PR doesn't do much more than that. I'll move out the modification of the adapter about const for now. There is also a modification of … Now, as said above, I did this as a proof of concept for shared pointers, but it needs to be generalized.
Since I haven't spent time on understanding the roles of the other adapters yet and how they work, I have no opinion about this.
We probably don't need to add a new field to Info for any of that; using the Info.annotations given for constructors should be sufficient. As for the Type.shared and shared_ptr fields, from what I can see, they're only useful to add the annotation hacks for PyTorch that are very specific to how it makes use of std::shared_ptr. So, like I keep telling you, that belongs in the presets of PyTorch. Unless we figure out some way to generalize it, yes, sure, but instead of trying to do everything in this single PR, let's first merge the things that clearly make sense.
Sorry, but my experience with presets is limited. What exactly is special in how PyTorch uses std::shared_ptr? I'm just realizing that it can work without any info; I thought that the pointerTypes and annotations (@Cast…@SharedPtr) were mandatory so that merging the native class X and shared_ptr<X> into a single Java class X could work.
shared_ptr<C>
are handled by specifying this info:which results in the ability to use Java class
C
indifferently to represent a nativeC
or ashared_ptr<C>
.This works well, but this is incomplete, because if a
c = new C()
is instantiated, and then passed to a native function taking ashared_ptr<C>
, and then the instance is deallocated, either explicitly withc.deallocate()
, or by a PointerScope, or by the GC, the native object will be freed, causing a SIGSEGV if the native library tries to accessc
. Thus ruining the point of using shared pointers. Currently we can add this Info on C constructor :to avoid the SIGSEGV, but then instances of C will never be freed. We need to be able to call
make_shared
from Java-side.This PR adds a new Info:
that:
to:
What do you think of this solution ?
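For readers following along, a hypothetical example of the kind of Info described at the top of this comment (the name `ns::C` is assumed; real presets may also need an additional @Cast, as mentioned above):

```java
import org.bytedeco.javacpp.tools.Info;
import org.bytedeco.javacpp.tools.InfoMap;
import org.bytedeco.javacpp.tools.InfoMapper;

public class CPresets implements InfoMapper {
    public void map(InfoMap infoMap) {
        // Hypothetical: let the Java class C represent both a native C and a
        // shared_ptr<C>; @SharedPtr makes SharedPtrAdapter handle conversions.
        infoMap.put(new Info("std::shared_ptr<ns::C>")
                .annotations("@SharedPtr")
                .pointerTypes("C"));
    }
}
```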
An alternative to `makeShared`, a bit more efficient, is to add a `@SharedPtr` annotation to `allocate` and have the Generator output an alternate version of `allocate`, but that would need to patch the Generator, and its magic is a bit too dark for me.