amazon web services - How long does it take for AWS S3 to save and load an item?


The S3 FAQ mentions that "Amazon S3 buckets in all regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES." However, I don't know how long eventual consistency takes. I tried to search for an answer but couldn't find one in the S3 documentation.

Situation:

We have a website that consists of 7 steps. When a user clicks on save in each step, we want to save a JSON document (containing the information from all 7 steps) to Amazon S3. We plan to:

  1. Create a single S3 bucket to store the JSON documents.
  2. When a user saves step 1, create a new item in S3.
  3. When a user saves steps 2-7, overwrite the existing item.
  4. After a user saves a step and refreshes the page, they should be able to see the information they just saved, i.e. we want to make sure we always read after write.

The full JSON document (all 7 steps completed) is around 20 KB. After a user clicks the save button, we can freeze the page for some time so they cannot make other changes until the save is finished.

Questions:

  1. How long does it take AWS S3 to save and load an item? (We can freeze our website while the document is being saved to S3.)
  2. Is there a function to calculate the save/load time based on the item size?
  3. Will the save/load time be different if we choose a different S3 region? If so, what is the best region for Seattle?

I wanted to add to @error2007s's answer.

How long does it take AWS S3 to save and load an item? (We can freeze our website while the document is being saved to S3.)

It's not that you couldn't find the exact time anywhere - there's no such thing as an exact time. That's what "eventual consistency" is about: consistency is achieved eventually. You can't know when.

If AWS gave an upper bound on how long the system could take to achieve consistency, they wouldn't call it "eventually consistent" anymore. It would be "consistent within X amount of time".


The question then becomes, "How do I deal with eventual consistency?" (instead of trying to "beat it").

To answer that question, you first need to understand what kind of consistency you actually need, and how the eventual consistency of S3 affects your workflow.

Based on your description, I understand that you would write to S3 a total of 7 times, once for each step you have. For the first write, as you correctly cited from the FAQs, you get strong consistency for any reads after that. All subsequent writes (which are "replacing" the original object) might be subject to eventual consistency - that is, if you try to read the overwritten object, you might get the most recent version, or you might get an older one. This is what is referred to as "eventual consistency" on S3 in your scenario.
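The difference between the two behaviors can be illustrated with a toy in-memory model. This is purely illustrative - not the AWS SDK, and not how S3 is implemented - it just makes the distinction concrete: a brand-new key reads back immediately, while an overwritten key may serve the old version for a few reads before the new one propagates.

```python
# Toy model of the consistency behavior described above -- illustrative
# only, not the AWS SDK. A new key is read-after-write consistent; an
# overwritten key may serve the previous version for a while ("staleness"
# here is a fixed number of reads, just to make the demo deterministic).
class EventuallyConsistentStore:
    def __init__(self, staleness=2):
        self.current = {}        # latest version of each key
        self.stale = {}          # key -> [old_value, stale_reads_remaining]
        self.staleness = staleness

    def put(self, key, value):
        if key in self.current:
            # Overwrite: the old version may still be served for a while.
            self.stale[key] = [self.current[key], self.staleness]
        self.current[key] = value

    def get(self, key):
        if key in self.stale:
            old_value, remaining = self.stale[key]
            if remaining > 0:
                self.stale[key][1] -= 1
                return old_value     # stale read of the overwritten object
            del self.stale[key]
        return self.current[key]     # consistent read


store = EventuallyConsistentStore(staleness=2)
store.put("prefs.json", '{"step": 1}')
print(store.get("prefs.json"))   # new object: read-after-write, step 1

store.put("prefs.json", '{"step": 2}')
print(store.get("prefs.json"))   # overwrite: may still return step 1
store.get("prefs.json")
print(store.get("prefs.json"))   # eventually returns step 2
```

The point of the model is that there is no fixed delay you can wait out: the reader simply cannot tell, from a single read, whether it got the latest version of an overwritten object.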

A few alternatives for you to consider:

  • Don't write to S3 on every single step; instead, keep the data for each step on the client side, and write one single object to S3 after the 7th step. That way, there's only 1 write, no "overwrites", and therefore no "eventual consistency". This might or might not be possible for your specific scenario; you'd need to evaluate that.
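A minimal sketch of that first alternative, with hypothetical names (`PreferencesDraft`, the bucket and key in the comment) standing in for whatever your application uses - each step is buffered locally and only the final document is uploaded:

```python
import json

# Sketch of the "single write" approach: buffer each step's data locally
# and serialize everything into one document at the end. All names here
# are illustrative, not from any real API.
class PreferencesDraft:
    def __init__(self):
        self.steps = {}

    def save_step(self, step_number, data):
        # Kept in memory; nothing is sent to S3 yet.
        self.steps[f"step-{step_number}"] = data

    def to_json(self):
        return json.dumps(self.steps)


draft = PreferencesDraft()
for n in range(1, 8):
    draft.save_step(n, {"answer": f"value for step {n}"})

document = draft.to_json()
# One single PUT of a new object, no overwrites, e.g. with boto3:
# s3.put_object(Bucket="my-bucket", Key="bruno-preferences.json",
#               Body=document.encode("utf-8"))
```

The upload itself is left as a comment since it needs an S3 client and credentials; the essential point is that only one write ever happens.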

  • Alternatively, write objects to S3 with a different name for each step. E.g., something like this: after step 1, save to bruno-preferences-step-1.json; then, after step 2, save the results to bruno-preferences-step-2.json; and so on, finally saving the complete preferences file as bruno-preferences.json, or maybe bruno-preferences-step-7.json, which gives you the flexibility to add more steps in the future. Note that the idea here is to avoid the overwrites, which are what causes the eventual consistency issues. Using this approach, you only ever write new objects, and you never overwrite them.
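The per-step naming scheme could be captured in a small helper like the one below (a sketch; the key format just follows the example names above):

```python
# Illustrative helper: a fresh object key per step, so every save is a
# PUT of a *new* object (read-after-write consistent) instead of an
# overwrite (eventually consistent).
def step_key(user, step, total_steps=7):
    if not 1 <= step <= total_steps:
        raise ValueError(f"step must be between 1 and {total_steps}")
    return f"{user}-preferences-step-{step}.json"


print(step_key("bruno", 1))  # bruno-preferences-step-1.json
print(step_key("bruno", 7))  # bruno-preferences-step-7.json
```

When the page is reloaded, the application would read back the highest-numbered step object that exists, which is always a fresh, never-overwritten key.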

  • Finally, you might want to consider Amazon DynamoDB. It's a NoSQL database that you can securely connect to directly from the browser or from your server. It provides replication, automatic scaling, and load distribution (just like S3). And you have the option to tell DynamoDB that you want to perform strongly consistent reads (the default is eventually consistent reads; you have to change a parameter to get strongly consistent reads). DynamoDB is typically used for "small" records, and 20 KB is well within range -- the maximum size of a record is 400 KB as of today. You might want to check this out: DynamoDB FAQs: What is the consistency model of Amazon DynamoDB?
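With boto3 (the AWS SDK for Python), the parameter in question is `ConsistentRead` on `get_item`. The sketch below only builds the request parameters so it stays self-contained - the table and key names are made up, and the actual boto3 call is shown in a comment:

```python
# Build parameters for a strongly consistent DynamoDB read.
# The key structure ("userId") and table name are made up for illustration.
def consistent_get_item_params(key):
    return {
        "Key": key,
        # Default is False (eventually consistent reads); setting True
        # requests a strongly consistent read instead.
        "ConsistentRead": True,
    }


params = consistent_get_item_params({"userId": "bruno"})
# With boto3, this would be used roughly as:
#   table = boto3.resource("dynamodb").Table("preferences")
#   item = table.get_item(**params)["Item"]
print(params["ConsistentRead"])  # True
```

Note that strongly consistent reads consume more read capacity than eventually consistent ones, so it's a trade-off rather than a free upgrade.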