Re: [WebDNA] Apple Server

This WebDNA talk-list message is from 2013.
It keeps the original formatting.
numero = 110358
interpreted = N
texte = Holy Hell Paul!!!
I think you may have led me on the right path!

After doing a bunch of digging into the query_cache settings on MySQL, I was able to figure out that our query cache (set to 256 MB) was causing all sorts of issues.

First, at 256 MB it caused huge slowdowns whenever it was thrashed about by a table update, with all the hundreds of thousands of entries needing to be selectively invalidated.

I spent 24 hrs tuning it down to a spry 48 MB, which meant it never really pruned for lack of memory and yet still held the needed 20,000 queries in cache.
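For anyone else chasing this, here are the knobs involved, as a sketch against a 2013-era MySQL 5.x; the values are just what worked for me, not a recommendation:

    # my.cnf
    [mysqld]
    query_cache_type = 1      # cache enabled
    query_cache_size = 48M    # was 256M

    -- then, from the mysql client, verify the settings and watch the cache:
    SHOW VARIABLES LIKE 'query_cache%';
    SHOW STATUS LIKE 'Qcache%';    -- queries in cache, free memory, prunes, etc.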

This ALONE seems to have stopped the server-crushing slowdowns (at least I am 36 hrs in now with no slowdowns).

Beyond that, I found that it would become fairly fragmented after 24 hrs or so. Flushing the cache seemed to help that tremendously and speed it up.
During one forum data cleanup it recorded 12,000 memory prunes, most likely due to fragmentation.
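(The flush I mean is the one that defragments without emptying the cache; RESET QUERY CACHE is the one that throws everything away. The statement, and the counters that showed the pressure:)

    FLUSH QUERY CACHE;                        -- defragment, keep the entries
    SHOW STATUS LIKE 'Qcache_free_blocks';    -- high and climbing = fragmented
    SHOW STATUS LIKE 'Qcache_lowmem_prunes';  -- memory prunes are counted here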


BUT:

Due to the nature of the code that VB uses, I found the number of cached reads compared to new cache inserts was only about a 1:2.5 ratio. Everything I was able to dig up these past 2 days says that if you aren't getting a 1:5 ratio or better, you would probably serve faster without it.
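(If you want to check your own ratio, the two counters are standard MySQL 5.x status variables:)

    SHOW STATUS LIKE 'Qcache_hits';       -- reads served from the cache
    SHOW STATUS LIKE 'Qcache_inserts';    -- new result sets written into it
    -- compare the two for the reads-to-inserts ratio discussed above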


Which is exactly where I am now.
All 5 sites are serving noticeably faster without query_cache on.
As I said, I am at about 36 hours with no lockups, which is some kind of record for the past 6 months or longer.
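(Turning it off entirely, for anyone who wants to run the same experiment; stock 5.x option names:)

    # my.cnf
    [mysqld]
    query_cache_type = 0    # stop caching
    query_cache_size = 0    # and give the memory back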

I am hopeful that this is actually the problem, and if it holds up for a week or so, I may go back in and increase the KeepAlive settings to more reasonable numbers. I have them down to 1 second and 25 children because they would exponentially complicate the problem when the slowdowns occurred.
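(For context, here is roughly what those defensive settings look like; these are Apache 2.2-era prefork directives, and I'm reading "25 children" as the MaxClients cap:)

    # httpd.conf
    KeepAlive On
    KeepAliveTimeout 1      # down from the usual 5-15 seconds
    <IfModule prefork.c>
        MaxClients 25       # the "25 children"
    </IfModule>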

Funny, after all this time and all this pain, the answer seems to have come from this list :-P


Thanks
Alex



On Apr 25, 2013, at 12:02 PM, Paul Willis <paul.willis@me.com> wrote:


> On 25 Apr 2013, at 16:41, Alex McCombie <info@adventureskies.com> wrote:

>> On Apr 25, 2013, at 11:20 AM, Paul Willis <paul.willis@me.com> wrote:

>>> 1) MySQL would dump the expired query cache for a query that got a lot of requests (as it should; the data had been updated and the cache was old), but it took too long to process, and a queue of the same big query quickly built up from which it could not recover, just as you describe.

>> Interesting, but how did you resolve this if it was indeed causing a slowdown?

> We changed the way the site worked for the panel that called that query. It was a while ago now, so my mind is a bit hazy, but it was something like a 'most viewed articles' panel. Instead of being 'live' we made it cache for a day or similar.
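(A sketch of what "cache it for a day" can look like on the MySQL side; the event scheduler exists from 5.1 on, and every table and column name below is invented for illustration:)

    -- one-time: SET GLOBAL event_scheduler = ON;
    CREATE EVENT refresh_most_viewed
    ON SCHEDULE EVERY 1 DAY
    DO
      REPLACE INTO most_viewed_cache (article_id, views)
        SELECT article_id, views
        FROM articles
        ORDER BY views DESC
        LIMIT 10;
    -- the panel then reads most_viewed_cache instead of the live counts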


>>> Another option which might help (which we did) is to split the MySQL off onto its own server
>>> Paul

>> Also interesting, but if the problem was squarely on MySQL, then I'm not sure how having it separated would help all that much. Maybe free up a few CPU ticks from Apache.

> The web server was tweaked to be better for Apache, serving lots of small files, and the db server was configured to be better for MySQL, more RAM etc.
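(Roughly the kind of split tuning Paul describes, as a sketch; the address and sizes here are made up:)

    # my.cnf on the dedicated DB box
    [mysqld]
    bind-address            = 192.168.0.10   # private interface facing the web server
    innodb_buffer_pool_size = 4G             # with Apache gone, most of the RAM can go here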


>> The move to OSX was as much about getting me into something I had more experience with, admittedly, as anything else.

> We found that with old versions of Apple Server (don't know what it's like in later versions) we ended up having to learn terminal stuff to bypass the limitations of the GUI tools. Once we got the hang of it and became terminal geeks, we found we actually preferred it that way. All the stuff we learned in the OS X terminal transferred over to Ubuntu nicely.


>> I am intrigued though that you were having similar issues.

>> What was your ultimate resolution? Did you find a bad query? Was it the cache dump primarily? Or an overtaxed server?
>> Chasing this thing has been like chasing shadows. You think you got it and then 3 hrs later it's back.

> It wasn't one solution, it was an ongoing combination of things. We changed the way queries worked, we eliminated slow queries by rewriting them properly, and we took the load off the server by splitting it.
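(The standard way to surface the slow ones before rewriting them is MySQL's slow query log; these are stock 5.x option names:)

    # my.cnf
    slow_query_log                = 1
    slow_query_log_file           = /var/log/mysql/slow.log
    long_query_time               = 1    # seconds
    log_queries_not_using_indexes = 1
    # summarize afterwards with: mysqldumpslow /var/log/mysql/slow.log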

> I do feel for you though. We had the same thing: thinking you had nailed the issue, but then it suddenly reappearing a few days later. As you eliminate one issue the next one rears its head, and all the time the goalposts are moving as traffic increases and new features are added to the site.

> Paul



> ---------------------------------------------------------
> This message is sent to you because you are subscribed to the mailing list. To unsubscribe, E-mail to: archives: http://mail.webdna.us/list/talk@webdna.us Bug Reporting: support@webdna.us

Associated Messages, from the most recent to the oldest:

    
  1. Re: [WebDNA] Apple Server (Paul Willis 2013)
  2. Re: [WebDNA] Apple Server (Alex McCombie 2013)
  3. Re: [WebDNA] Apple Server (Paul Willis 2013)
  4. Re: [WebDNA] Apple Server (Alex McCombie 2013)
  5. Re: [WebDNA] Apple Server (Paul Willis 2013)
  6. Re: [WebDNA] Apple Server (Alex McCombie 2013)
  7. Re: [WebDNA] Apple Server (Paul Willis 2013)
  8. Re: [WebDNA] Apple Server (Alex McCombie 2013)
  9. Re: [WebDNA] Apple Server (Donovan Brooke 2013)
  10. [WebDNA] Apple Server (Alex McCombie 2013)
Alex McCombie


Related Readings:

[WebDNA] Help me install (WebDNA 7 fastcgi install on Lion 10.7.3) (2012)
CAlendar (2003)
WebCat2 beta 11 - new prefs ... (1997)
Ship Cost Calculated via Subtotal (1998)
Bug Report, maybe (1997)
Erotic Sites (1997)
WebCatalog memory error. (1998)
how to get s repeatedly in and out of a form? (1999)
RE: pricing continued (1998)
Variable Math (1998)
Clear command and ShoppingCart.tmpl (1997)
tag request (1999)
NT error logs (1997)
syntax question, not in online refernce (1997)
New Webcatalog for Mac (1997)
No luck with taxes (1997)
XML Woes Revisited (2004)
RE: WebDNA-Talk searchable? (1997)
Re[2]: Images (2000)
Custom WebCat Prefs ... (1997)