In JVisualVM that looked something like this...

Since JVisualVM takes the number of cores into account, the JVM was hovering around 50% of the overall CPU time for the machine. Using the top command gave a different picture of the same issue. In top, the java process was being reported as using around 100% of CPU time (which it was, on that one core)...
The PID of the process was what I needed to get out of top; in this case it was 29329.
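A quick way to find that PID without scanning the full top display is pgrep; a minimal sketch (assuming a procps-style pgrep and a process whose command line contains "java"):

```shell
# List PIDs whose full command line matches "java";
# -f matches against the whole command line, not just the process name.
pgrep -f java
```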
Just having the PID for the JVM was not enough though. Pressing the shift-h key combination in top switched to thread display mode, which revealed more about what was consuming all of that CPU. In my case I had 3 threads taking up a considerable amount of CPU time...
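The same per-thread view can be reached straight from the command line; a sketch, assuming a procps-style top and the JVM PID 29329 from above:

```shell
# -H shows individual threads (same as pressing shift-h interactively),
# -p restricts the display to the one process we care about.
top -H -p 29329
```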
Even though there were 3 threads, looking closely at these results showed that the thread with the PID of 29523 had only used 1.71s of CPU time, so it was unlikely to be the culprit. The other two were both over 2 minutes of CPU time (and the server had not been up for long), so they were the likely culprits here.
Now that I had enough information about the JVM and its threads, it was time to use jstack to get a thread dump. This required the JVM PID to be passed in, and I sent the output to a temporary file like so...
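A sketch of that step, assuming a JDK with jstack on the PATH and a hypothetical file name for the dump:

```shell
# jstack ships with the JDK; it prints the stack of every JVM thread.
# Redirect the dump to a file so it can be searched later.
jstack 29329 > /tmp/jvm-threads.txt
```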
Now it was time to do some decimal to hexadecimal conversion. Since the stack trace lists thread IDs in hex while top reports them in decimal, I used this site to convert the two thread PIDs I was interested in into hex. In my case the results looked like this...
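There is no need for a website here; printf can do the conversion. As an illustration, converting 29523 (one of the thread PIDs seen earlier):

```shell
# %x prints a decimal argument in lowercase hexadecimal.
printf '%x\n' 29523   # -> 7353
```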
Now it was a matter of looking through the thread dump file and finding the threads by their IDs (looking for nid=<PID> in the dump). I got two stack traces from that...
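The lookup can be scripted too; a sketch, assuming the hypothetical dump file from earlier and the example thread ID 0x7353:

```shell
# jstack writes the native thread ID as nid=0x<hex>; grep with some
# trailing context (-A) pulls out the start of that thread's stack trace.
grep -A 15 'nid=0x7353' /tmp/jvm-threads.txt
```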
From here I could go on with my debugging, but from both the stack traces it was clear to me that something was wrong with the persistent file store. A little bit of the mystery was solved!