DbManager/MySQL connections - what would create so many?

Post here your questions about Actionscript and Java server side extensions development.

Moderators: Lapo, Bax

Molix
Posts: 40
Joined: 01 Apr 2009, 19:34
Location: Toronto, ON, Canada

DbManager/MySQL connections - what would create so many?

Postby Molix » 02 Aug 2009, 18:17

Hello,

We recently encountered some failed queries (executeQuery returning null), and checking the logs indicates that SFS has run out of connections ("Cannot get a connection, pool exhausted").

The default is 10 I think, so I figured ok, no problem, I'll increase it. But it was already set to 100. I've set it to "grow" now, but it is irrelevant because the MySQL server is set to 100 too.

But that's not the point: I am wondering how I could possibly have so many in use. I have ~6 zone extensions, and 1 room extension x 5 rooms tops (the most we've gotten to in testing so far). Currently each extension has the seemingly typical init() { ... db = currZone.dbManager; ... }. So maybe 12 there.

I don't do much beyond using executeQuery/executeCommand, though I do have one general "DbInsert" routine that uses db.getConnection().createStatement() so that I can get the auto-incremented key back. Could this be causing "leaked" connections? Is there a proper way to dispose of one of these?

I must be doing something wrong, but how can I track it down? Is there a way to see how many connections are in use, and who owns them?

Thanks in advance.
Lapo
Site Admin
Posts: 23008
Joined: 21 Mar 2005, 09:50
Location: Italy

Postby Lapo » 02 Aug 2009, 19:39

Hello,

The default is 10 I think, so I figured ok, no problem, I'll increase it. But it was already set to 100. I've set it to "grow" now, but it is irrelevant because the MySQL server is set to 100 too.

Sorry, I don't understand why you say it's irrelevant...
Actually, setting the pool to "grow" makes sense because it eliminates the exception that causes the problem you mentioned.
The only side effect is that the connection pool might grow indefinitely and consume significant memory.
You might need to adjust the JVM's memory settings if you run short on resources.
I don't do much beyond using executeQuery/executeCommand, though I do have one general "DbInsert" routine that uses db.getConnection().createStatement() so that I can get the auto-incremented key back. Could this be causing "leaked" connections? Is there a proper way to dispose of one of these?

Ah yes! That's very likely a leak.
The reason is that, by doing so, you don't keep a reference to the obtained connection object, so you never return it to the pool, causing incremental growth.

Each and every time you grab a connection manually, keep a reference to it and, when you are finished with it, call its close() method.
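In code, the pattern described here might look like the following sketch. The Connection is passed in only for illustration (in an extension it would come from the zone's dbManager.getConnection()), and the SQL is made up:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class PoolSafeUpdate {
    /**
     * Runs an update and always hands the connection back in the finally
     * block. Closing a pooled connection returns it to the pool rather
     * than tearing down the physical MySQL connection.
     */
    public static int runUpdate(Connection conn, String sql) throws SQLException {
        Statement stmt = null;
        try {
            stmt = conn.createStatement();
            return stmt.executeUpdate(sql);
        } finally {
            if (stmt != null) {
                try { stmt.close(); } catch (SQLException e) { /* ignore */ }
            }
            conn.close(); // returns the connection to the pool
        }
    }
}
```

Because the close() happens in finally, the connection goes back to the pool even when the update throws.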

Greetings
Lapo
--
gotoAndPlay()
...addicted to flash games
Molix
Posts: 40
Joined: 01 Apr 2009, 19:34
Location: Toronto, ON, Canada

Postby Molix » 02 Aug 2009, 20:53

Thanks for the reply Lapo.

Sorry, I don't understand why you say it's irrelevant...
Actually, setting the pool to "grow" makes sense because it eliminates the exception that causes the problem you mentioned.


I just meant that, yes, SFS's connection pool will be able to grow past 100, but it won't matter since the MySQL server will refuse more than 100 connections anyway (max_connections), presumably triggering a different error. At least that's what I thought.

And even if it did grow auto-magically, I'd really like to know why :)

This is the "generic" DbInsert function I use, which is the only one that calls getConnection(). Maybe the fact that it is static is bad? I only did that so I could call it from the other extensions conveniently.

Code:

/**
 * Execute the given SQL INSERT statement and return the auto-incremented value
 * @param sql An INSERT SQL statement
 * @return Auto-incremented value of the inserted record
 */
public static int DbInsert( String sql )
{
    java.sql.Statement stmt = null;
    java.sql.ResultSet rs = null;
    int autoIncKeyFromApi = -1;

    try
    {
        stmt = _instance.db.getConnection().createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_UPDATABLE);
        stmt.executeUpdate(sql, java.sql.Statement.RETURN_GENERATED_KEYS);
        rs = stmt.getGeneratedKeys();
        if( rs.next() )
        {
            autoIncKeyFromApi = rs.getInt(1);
        }
        else
        {
            _instance.trace("Failed to get key of inserted record: \n" + sql);
        }
    }
    catch( java.sql.SQLException e )
    {
        _instance.trace("Exception during insert: \n" + e.getMessage());
    }
    finally
    {
        if (rs != null)
        {
            try { rs.close(); } catch (java.sql.SQLException ex) { /* ignore */ }
        }

        if (stmt != null)
        {
            try { stmt.close(); } catch (java.sql.SQLException ex) { /* ignore */ }
        }
    }

    return autoIncKeyFromApi;
}


Any glaring problems?
Molix
Posts: 40
Joined: 01 Apr 2009, 19:34
Location: Toronto, ON, Canada

Postby Molix » 02 Aug 2009, 20:58

I think I get what you're saying. I'm clearing the statement and resultset, but not the connection (since I didn't assign it to anything). Should it be more like:

Code:

java.sql.Connection conn = _instance.db.getConnection();
stmt = conn.createStatement(...)

...

finally
{
  if( conn != null )
  {
    conn.close();
    conn = null;
  }
}



?
Lapo
Site Admin
Posts: 23008
Joined: 21 Mar 2005, 09:50
Location: Italy

Postby Lapo » 03 Aug 2009, 06:12

Yes, exactly!

cheers
Lapo

--

gotoAndPlay()

...addicted to flash games
kokoon
Posts: 17
Joined: 10 Apr 2008, 13:43

Postby kokoon » 03 Aug 2009, 07:28

hi!

we're having similar problems, it looks like there is some kind of a leak in terms of open connections.

we're using getConnection() because we need prepared statements. we always close the connections.

before we set the pool to "grow", we kept getting "Cannot get a connection, pool exhausted", even after setting Active to 200 and Idle to 65535.

after setting it to "grow", we started getting "Too many open files". i know i can raise ulimit, but i don't like this. there's a leak somewhere.

so what i'm asking: can you please explain in detail what the following dbmanager settings do:

- MaxActive - what is an "Active" connection? for how long does the connection stay active after it's been closed? what happens when this limit is reached?

- MaxIdle - what is an "Idle" connection? why is there a separate setting for this? what happens when there are more idle connections than this limit?


i understand how various OnExhaustedPool settings work, and BlockTime also. but i'd really like to know how to set the pool, so i don't have to make it grow indefinitely.

thanks!
Lapo
Site Admin
Posts: 23008
Joined: 21 Mar 2005, 09:50
Location: Italy

Postby Lapo » 03 Aug 2009, 08:58

If you use the getConnection() method you are responsible for returning the connections to the pool by explicitly calling the close() method.
Make sure that your code is doing this properly.
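For a prepared statement, "doing this properly" might look like the following sketch (the table and column names are made up, and the Connection is passed in for illustration; in an extension it would come from DbManager.getConnection()):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PreparedLookup {
    /** Looks up a score by user name, always returning the connection to the pool. */
    public static int getScore(Connection conn, String userName) throws SQLException {
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            ps = conn.prepareStatement("SELECT score FROM users WHERE name = ?");
            ps.setString(1, userName);
            rs = ps.executeQuery();
            return rs.next() ? rs.getInt(1) : -1;
        } finally {
            if (rs != null) try { rs.close(); } catch (SQLException e) { /* ignore */ }
            if (ps != null) try { ps.close(); } catch (SQLException e) { /* ignore */ }
            conn.close(); // hand the connection back to the pool
        }
    }
}
```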

after setting it to "grow", we started getting "Too many open files". i know i can raise ulimit, but i don't like this. there's a leak somewhere.

Of course, this is expected, and raising the ulimit is very important not only for the database but also for your clients. If you don't, you might end up refusing many connections to the server itself.

MaxActive = the max number of connections doing something, in other words actively talking to the database.

MaxIdle = the max number of connections not doing something. These can be reused by the pool when necessary.

"Grow" is useful when you can't predict the average database load, which is quite common. When using this modality is usually recommended to increase the default memory settings of the JVM to some higher settings like 256Mb, 512Mb or even higher depending on your application.

Bottom line: there's no golden rule for setting up the database pool because it is very application-specific, so monitoring the database traffic for a while and adjusting the settings accordingly is usually recommended.
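As a rough illustration, these settings live in the DatabaseManager section of config.xml. The fragment below is a sketch, not copied from a real config: the values are only examples to tune against your monitored load, and element names other than MaxActive, MaxIdle, OnExhaustedPool and BlockTime (which this thread discusses by name) should be checked against your own config.xml:

```xml
<DatabaseManager active="true">
    <Driver>com.mysql.jdbc.Driver</Driver>
    <ConnectionString>jdbc:mysql://localhost:3306/mygame</ConnectionString>
    <UserName>sfsUser</UserName>
    <Password>secret</Password>
    <MaxActive>50</MaxActive>               <!-- max connections actively talking to MySQL -->
    <MaxIdle>25</MaxIdle>                   <!-- max parked connections kept for reuse -->
    <OnExhaustedPool>grow</OnExhaustedPool> <!-- fail | block | grow -->
    <BlockTime>3000</BlockTime>             <!-- ms to wait when OnExhaustedPool is "block" -->
</DatabaseManager>
```

Keeping MaxActive comfortably below MySQL's max_connections leaves headroom for other clients of the same database server.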
Lapo

--

gotoAndPlay()

...addicted to flash games
kokoon
Posts: 17
Joined: 10 Apr 2008, 13:43

Postby kokoon » 04 Aug 2009, 10:14

i have just checked, we definitely call connection.close() for every and each connection we get from DbManager.getConnection().

for 328 calls to getConnection, close() was called 326 times, before the server stopped working because of "too many open files" errors, indicating that 2 games were still running then.

also, ulimit was set to 20000, exactly the way it's shown here.

there definitely weren't more than 200 users online at any time, but there were some calls to DbManager.executeQuery() and DbManager.executeCommand().

this problem with too many open files started appearing only after we added some more calls to DbManager.getConnection()!

this is urgent, i need help right away!
kokoon
Posts: 17
Joined: 10 Apr 2008, 13:43

Postby kokoon » 04 Aug 2009, 10:44

update:

looks like this problem was due to a DoS attack - they started targeting SmartFox now, instead of just Apache on port 80. this opened many connections, which in turn caused the "too many open files" errors.

i hope the DB manager is working correctly now, i'll keep monitoring it.
Lapo
Site Admin
Posts: 23008
Joined: 21 Mar 2005, 09:50
Location: Italy

Postby Lapo » 04 Aug 2009, 12:25

Ok!
Lapo

--

gotoAndPlay()

...addicted to flash games
