mysql - socket.io performance: one emit per database row?


I'm trying to understand the best way to read and send a huge number of database rows (50k-100k) to the client.

  1. Should I read all the rows at once from the database on the backend and send them to the client in one JSON payload? That isn't responsive, since the user waits a long time, though it's faster for a small number of rows.

  2. Should I stream the rows from the database and call socket.emit() for each row as it's read? That causes many socket emits; it's more responsive, but slow...

I'm using node.js and socket.io.

Rethink the interface

First off, a user interface design that shows 50-100k rows on the client is probably not the best user interface in the first place. Not only is that a large amount of data to send down to the client and for the client to manage (and perhaps impractical on mobile devices), it's way more rows than a single user is going to read in any given interaction with the page. So, the first order of business might be to rethink the user interface design and create some sort of more demand-driven interface (paged, virtual scroll, keyed by letter, etc...). There are lots of different possibilities for a different (and better) user interface design that lessens the amount of data to transfer. Which design is best depends entirely upon the data and the usage model for the user.
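As a sketch of the demand-driven approach, the server could return one page of rows per request instead of everything at once. The table name, column names, event names, and page size below are all assumptions for illustration, not anything from the original question:

```javascript
// Build a paged query instead of selecting every row at once.
// Placeholders keep the query safe for a driver like mysql2.
function pageToQuery(page, pageSize) {
  const offset = page * pageSize;
  return {
    sql: 'SELECT firstname, lastname, state, country FROM users LIMIT ? OFFSET ?',
    values: [pageSize, offset],
  };
}

// Hypothetical socket.io handler using it (event names are made up):
// socket.on('getPage', async (page) => {
//   const { sql, values } = pageToQuery(page, 100);
//   const [rows] = await pool.query(sql, values);
//   socket.emit('page', { page, rows });
// });
```

The client then asks for page N only when the user scrolls or clicks to it, so the payload per interaction stays small.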

Send the data in chunks

That said, if you are going to transfer all the data to the client, you're going to want to send it in chunks (groups of rows at a time). The idea with chunks is that you send a consumable amount of data in one chunk such that the client can parse it, process it, show the results, and then be ready for the next chunk. The client can stay active the whole time since it has cycles available between chunks to process other user events. At the same time, sending in chunks reduces the overhead of sending a separate message for each single row. If the server is using compression, chunks also give the compression a greater chance at efficiency. How big a chunk should be (e.g. how many rows of data it should contain) depends upon a bunch of factors and is best determined through experimentation with likely clients, or with the lowest-power client you expect. For example, you might want to send 100 rows per message.
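A minimal sketch of chunked emits along those lines; the chunk size of 100 and the 'rows'/'rowsDone' event names are assumptions for illustration:

```javascript
// Split an array of rows into fixed-size chunks.
function chunk(rows, size) {
  const chunks = [];
  for (let i = 0; i < rows.length; i += size) {
    chunks.push(rows.slice(i, i + size));
  }
  return chunks;
}

// Emit one message per chunk instead of one per row.
// "socket" is a connected socket.io socket; "rows" came from the database.
function sendInChunks(socket, rows, size = 100) {
  for (const group of chunk(rows, size)) {
    socket.emit('rows', group);     // hypothetical event name
  }
  socket.emit('rowsDone');          // tell the client the transfer finished
}
```

On the client, handling the 'rows' event incrementally (render, then yield back to the event loop) is what keeps the page responsive between chunks.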

Use an efficient transfer format for the data

And, if you're using socket.io to transfer large amounts of data, you may want to revisit how you use the JSON format. For example, sending 100,000 objects that all repeat the same property names is not very efficient. You can invent your own optimizations that avoid repeating property names that are the same in every object. For example, rather than sending 100,000 of these:

 {"firstname": "john", "lastname": "bundy", "state": "az", "country": "us"} 

if every single object has the exact same properties, you can either hard-code the property names into your own code or send the property names once and then send just a comma-separated list of values in an array, which the receiving code can put back into an object with the appropriate property names:

 ["john", "bundy", "az", "us"] 

The data size can often be reduced 2-3x just by removing that redundant information.
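A sketch of that packing scheme, assuming every row really does have the same keys (the helper names packRows/unpackRows are made up):

```javascript
// Convert objects sharing the same keys into { keys, values } where
// each row becomes a bare array of values in key order.
function packRows(rows) {
  const keys = Object.keys(rows[0]);
  return { keys, values: rows.map(row => keys.map(k => row[k])) };
}

// Rebuild the original objects on the receiving side.
function unpackRows({ keys, values }) {
  return values.map(vals =>
    Object.fromEntries(keys.map((k, i) => [k, vals[i]])));
}
```

The sender emits packRows(rows) once per chunk (the key list is tiny relative to 100k rows of repeated property names), and the client calls unpackRows() before rendering.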

