mysql - How to increase database performance if there is 0.1M traffic?


I am developing a site and I'm concerned about performance.

In the current system there are transactions adding 10,000 rows to a single table. That doesn't matter much right now; the insert took around 0.6 seconds.

But I am worried about what happens if there are 100,000 concurrent users and 1,000 of those users want to add 10,000 rows to a single table at once.

How would that impact performance compared to a single user? How can I improve these transactions if there is a large amount of traffic in that situation?

When write speed is mandatory, the way to tackle it is getting quicker hard drives. You mentioned transactions, which means you need the data to be durable (the D of ACID). That requirement rules out the MyISAM storage engine and most types of NoSQL, so I'll focus this answer on what goes on in relational databases.
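As a starting point, a quick sanity check along those lines (a minimal sketch: the variable names are standard InnoDB settings, but the table name is made up and the values shown are only an illustration of a fully durable configuration):

    -- confirm the table actually uses a transactional engine (InnoDB)
    SHOW TABLE STATUS LIKE 'your_table';

    -- flush the redo log to disk at every commit (full durability)
    SET GLOBAL innodb_flush_log_at_trx_commit = 1;

    -- if binary logging is enabled, sync it on every commit as well
    SET GLOBAL sync_binlog = 1;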

The way it works is this: each hard drive can perform a set number of input/output operations per second, or IOPS. Hard drives also have a metric called bandwidth, and the part of that metric we are interested in is write speed. The crude calculation here is: the number of MB per second divided by the number of IOPS = how much data you can squeeze through per I/O operation.

For mechanical drives, that magic IOPS number is anywhere between 150 and 300, which is quite low. Given a bandwidth of 100 MB/sec, that is a really small number of writes and little bandwidth per write (roughly 100 / 300 ≈ 0.33 MB per operation). This is where solid state drives kick in: their IOPS number starts at around 5,000 (some go up to 80,000), which is awesome for databases.

Connecting these drives in a RAID array gives you a super quick storage solution. Also, if you are able to squeeze those 10,000 inserts into 1 transaction, the disk will try to push the 10k inserts through as a single flush at commit time instead of 10,000 separate ones.
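A minimal sketch of that batching idea (the table and column names here are made up purely for illustration):

    -- hypothetical table used only for this example
    CREATE TABLE measurements (
        id      BIGINT AUTO_INCREMENT PRIMARY KEY,
        reading DECIMAL(10,2) NOT NULL
    ) ENGINE=InnoDB;

    START TRANSACTION;

    -- multi-row INSERTs keep the statement count low;
    -- repeat this statement until all 10,000 rows are queued
    INSERT INTO measurements (reading) VALUES
        (1.10), (2.20), (3.30), (4.40), (5.50);

    -- one COMMIT = one durable flush for the whole batch
    COMMIT;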

Another strategy is partitioning the table and having multiple drives on which MySQL stores the data.
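A sketch of what that could look like (again a hypothetical table; the partitioning column, the ranges and the directory paths are all assumptions, and placing partitions on separate drives with DATA DIRECTORY requires innodb_file_per_table to be enabled):

    CREATE TABLE measurements_partitioned (
        id      BIGINT NOT NULL,
        reading DECIMAL(10,2) NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=InnoDB
    PARTITION BY RANGE (id) (
        -- each partition can live on its own physical drive
        PARTITION p0 VALUES LESS THAN (10000000) DATA DIRECTORY = '/mnt/disk1/mysql',
        PARTITION p1 VALUES LESS THAN (20000000) DATA DIRECTORY = '/mnt/disk2/mysql',
        PARTITION p2 VALUES LESS THAN MAXVALUE   DATA DIRECTORY = '/mnt/disk3/mysql'
    );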

This is about as far as you can go with a single MySQL installation. There are strategies for distributing data across multiple MySQL nodes, etc., but I assume that's out of scope for this question.

TL;DR: you need quicker disks.

