Re: [WebDNA] two ideas for running a cluster of WebDNA servers
This WebDNA talk-list message is from 2019
It keeps the original formatting.
numero = 114879
interpreted = N
texte = 2507

> One front-end server that will redirect the requests
> through POST or GET to a cluster of back-end servers. When
> the request is for writing, then only one of the back-end
> servers is solicited (always the same); when it is for
> reading, it could be any of the back-end servers.
>
> The "writing" server is the master, the others are slaves.
> Every 30 sec, through rsync, the master is copied to a
> slave, and the WebDNA slave reloads the databases in RAM.

Actually, in this situation the WebDNA dbs on the slave servers
must be flushed immediately BEFORE each rsync overwrites them.
If this happens every 30 seconds it will effectively defeat one
of the main advantages of WebDNA -- its typically superfast
"read from RAM" performance -- because every updated db would
have to be read into RAM again every 30 seconds.

> Another idea would be the same front-end server with the
> same back-end servers, and instead of using rsync, the
> front-end server would POST the request to each one of the
> back-end servers: no more master and slave.

I did something similar many years ago, but instead of
front-end and back-end servers, each server was equally
available via a sequential round-robin load distribution
system. This eliminated the need to flush the dbs every time
one server posted an update to another server, because they
would all be posted to at virtually the same time (within a
fraction of a second of each other, given a fast enough
network connection).

But when the internet connection between the servers goes
down, it creates SERIOUS out-of-sync problems such as failure
to log in, failure to see the updates the visitors just made,
etc. ... and this would absolutely require your next idea in
order to rebuild the dbs correctly when necessary:

> Another idea would be to keep a log database of all the
> writing requests so, by reading this log file, all the
> "slaves" would get the same information. Databases could
> even be rebuilt in case of necessity.

A system like this would become very complex and error-prone
in my opinion, and thus unlikely to be worth the effort when
non-WebDNA solutions are readily available and have been
tested and proven reliable for many years already.

> I am sure there are other solutions. Any other idea?

WebDNA's design is ideal for use on a single front-end server.
Trying to turn it into a back-end server would probably suck,
not just because of the amount of work required in an attempt
to turn it into something it is not, but mostly because if it
loses the inherent advantages it offers as a RAM-resident
front-end db server, you'll end up with a system that
underperforms the competition.

I'm wondering why you're thinking about this. Are you trying
to come up with a plan to "add value" to WebDNA and perhaps
broaden its market by attempting to make WebDNA a suitable
system for websites that need more than one server in order
to handle extreme loads?

Regards,

Kenneth Grome
WebDNA Solutions
http://www.webdnasolutions.com
Urgent/Emergency Phone: (228) 222-2917
Website, Database, Network, and Communication Systems

---------------------------------------------------------
This message is sent to you because you are subscribed to
the mailing list talk@webdna.us
To unsubscribe, E-mail to: talk-leave@webdna.us
archives: http://www.webdna.us/page.dna?numero=55
Bug Reporting: support@webdna.us
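A rough, purely illustrative sketch of the "POST every write to each
back-end server" idea discussed above, combined with the write-log that
would be needed to rebuild an out-of-sync node. This is not WebDNA code,
and the hostnames, the writes.log path, and the /WriteCommand.tpl
endpoint are made-up placeholders:

# Illustrative sketch only -- not WebDNA code. Hostnames, log path, and
# the /WriteCommand.tpl endpoint are hypothetical. The front end fans
# every write out to all back-end nodes and appends it to a log so a
# node that missed updates can be replayed back into sync later.
import json
import time
import urllib.parse
import urllib.request

BACKENDS = ["http://backend1.example.com", "http://backend2.example.com"]  # hypothetical
WRITE_LOG = "writes.log"                                                   # hypothetical

def replicate_write(path: str, params: dict) -> None:
    """POST the same write request to every back-end node and log it."""
    body = urllib.parse.urlencode(params).encode()
    for base in BACKENDS:
        req = urllib.request.Request(base + path, data=body, method="POST")
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                resp.read()
        except OSError:
            # This node is unreachable: it is now out of sync and must be
            # rebuilt from the write log once it comes back online.
            pass
    with open(WRITE_LOG, "a") as log:
        log.write(json.dumps({"t": time.time(), "path": path, "params": params}) + "\n")

def replay_log(base: str) -> None:
    """Re-send every logged write to one node to bring it back in sync."""
    with open(WRITE_LOG) as log:
        for line in log:
            entry = json.loads(line)
            body = urllib.parse.urlencode(entry["params"]).encode()
            req = urllib.request.Request(base + entry["path"], data=body, method="POST")
            with urllib.request.urlopen(req, timeout=5) as resp:
                resp.read()

if __name__ == "__main__":
    replicate_write("/WriteCommand.tpl", {"db": "orders.db", "sku": "1234", "qty": "2"})

Replaying the log against a node that was down for a while is what the
third idea amounts to; the sketch glosses over ordering, duplicate
suppression, and partial failures, which is where most of the complexity
mentioned above comes from.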