At https://github.com/blazegraph/database/wiki/REST_API#transaction-management-api, we have the note: "Either the txId or the readsOnCommitTime may be used for the timestamp=... parameter on the REST API methods." However, this appears to be only partly true (I'm running 2.1.6)...
Doing a SPARQL SELECT query (through the REST API) with the HTTP query parameter timestamp=(the value given by readsOnCommitTime in the transaction response) does return data as expected for that time (i.e. you don't see any data that was inserted after that time).
Doing a SELECT query through the REST API with timestamp=(the txId of a read/write transaction) doesn't work; it still returns data newer than that transaction's creation time. I suspect this has something to do with the txId provided being a negative integer.
Similarly, if I perform a SPARQL UPDATE to insert data through the REST API with timestamp set to either the txId or the readsOnCommitTime, that data is immediately visible to a non-transactional SPARQL SELECT query (i.e. before the transaction has been either committed or aborted). If I then abort the transaction, the data nevertheless remains committed.
In UpdateServlet.java, I can see references to readOnlyTimestamp and ITx.UNISOLATED, but I can't see any indication that it actually attempts isolated operations inside a read/write transaction.
So, it looks to me as if either (a) the documentation is advertising a capability that is not actually available yet (in which case the documentation should be revised to say so), or (b) there is some implementation detail preventing RW transactions from behaving as expected (though I can't see where or what this would be).
Has functionality for performing updates inside a read-write transaction actually been implemented yet?
Or am I just using the API incorrectly? To put it another way, when the documentation says:

```
POST /bigdata/tx => txId
doWork(txId)...
POST /bigdata/tx/txid?COMMIT
```

what is the "doWork(txId)" step supposed to look like for SPARQL updates?
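For concreteness, here is a minimal sketch of the request URLs I am constructing (the base URL and the kb namespace come from my local setup, and the helper names are mine, not part of any API):

```python
# Sketch of the REST calls I am attempting, following the wiki's
# Transaction Management API description. Nothing here is sent over
# the network; it only shows how I build the URLs.
from urllib.parse import urlencode

BASE = "http://localhost:9999/bigdata"  # assumed local Blazegraph instance


def tx_begin_url():
    # POST here to create a read/write transaction; the response
    # carries the txId and readsOnCommitTime.
    return f"{BASE}/tx"


def select_url(timestamp):
    # Attach the transaction's txId (or readsOnCommitTime) as the
    # timestamp query parameter, per the wiki note. The SPARQL query
    # itself would go in the request body or a query= parameter.
    return f"{BASE}/namespace/kb/sparql?" + urlencode({"timestamp": timestamp})


def tx_commit_url(tx_id):
    # POST here to commit the transaction.
    return f"{BASE}/tx/{tx_id}?COMMIT"


# Example with a (negative) txId, as returned for a read/write transaction:
print(select_url(-42))     # .../namespace/kb/sparql?timestamp=-42
print(tx_commit_url(-42))  # .../tx/-42?COMMIT
```

It is the SELECT/UPDATE requests built like `select_url(txId)` above that do not appear to respect the transaction's isolation.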
Any information would be appreciated! :)