
Arbitrary takeover of key ranges during a cluster partition can lead to data loss. #59

Open
GoogleCodeExporter opened this issue Apr 8, 2015 · 11 comments


@GoogleCodeExporter

What steps will reproduce the problem?
1. Start Scalaris with four nodes. Each node should own an equal part of the keyspace.
2. Suspend all nodes except the boot node by pressing Ctrl-C in the Erlang shell of these nodes.
3. Write a value for some key:
ok=cs_api_v2:write("Key", 1).
1=cs_api_v2:read("Key").
4. Resume the suspended nodes (by pressing c + [enter] on each).
5. Try to read the key value (a condensed transcript of the whole session follows these steps):
{fail, not_found}=cs_api_v2:read("Key").
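
Condensed, the steps above amount to the following session in the boot node's shell; the output lines are the ones reported above, the comments only mark when the other nodes are suspended and resumed:

%% With the three non-boot nodes suspended (Ctrl-C in their shells):
> cs_api_v2:write("Key", 1).
ok
> cs_api_v2:read("Key").
1
%% After resuming the suspended nodes (c + [enter] in each shell):
> cs_api_v2:read("Key").
{fail, not_found}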

What is the expected output? What do you see instead?
The data is lost after the cluster recombines. A different effect can occur when the key is written on each breakaway node by a different client: after recombination, the nodes can end up storing copies of the key that have different values but the same version. Subsequent reads of that key can then return different values to different clients at the same time.
Proof:
> cs_api_v2:range_read(0,0).
{ok,[{12561922216592930516936087995162401722,2,false,0,0},
     {182703105677062162248623391711046507450,4,false,0,0},
     {267773697407296778114467043568988560314,1,false,0,0},
     {97632513946827546382779739853104454586,3,false,0,0}]}
These are four different values for the "Key" key.
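
To see why equal versions make the divergence undetectable, here is a minimal sketch (a toy illustration, not Scalaris code; module and function names are hypothetical) of a version-based replica selection. With the four replicas above all carrying version 0, sorting by version cannot break the tie:

%% Toy illustration only -- quorum_demo and pick/1 are hypothetical names.
-module(quorum_demo).
-export([pick/1]).

%% Replies is a list of {Value, Version} pairs collected from the replicas,
%% e.g. [{2,0}, {4,0}, {1,0}, {3,0}] as in the range_read output above.
%% Sorting by descending version cannot break a tie, so the result depends
%% on the order of the replies rather than on their content.
pick(Replies) ->
    [{Value, _Version} | _] =
        lists:sort(fun({_, V1}, {_, V2}) -> V1 >= V2 end, Replies),
    Value.

For example, quorum_demo:pick([{2,0},{4,0},{1,0},{3,0}]) returns 2, while a different ordering of the same replies can return any of the other values; this is exactly the ambiguity visible in the range_read output.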

What version of the product are you using? On what operating system?
r978

Please provide any additional information below.


Original issue reported on code.google.com by serge.po...@gmail.com on 10 Aug 2010 at 3:31

@funny-falcon

Is it fixed?

@schintke
Member

In principle, it is fixed when you configure Scalaris to use our experimental ring maintenance by setting {leases, true}. If not enough replicas are available, the write request simply hangs. When you resume the suspended nodes, the write finishes successfully.
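
For reference, a minimal sketch of how that setting can be enabled; the file name bin/scalaris.local.cfg is an assumption about the usual location for local configuration overrides and may differ between releases:

%% bin/scalaris.local.cfg -- local configuration overrides (Erlang terms)
{leases, true}.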
