Why is Insight running unusually slow?
Performance issues can be notoriously difficult to diagnose due to the sheer number of factors involved. If you are unhappy with the performance or suspect something is wrong, our advice is to send us diagnostic logs while Insight is slow or unresponsive (see Viewing and Sending Diagnostic Logs). We will often respond within minutes.
However, it may be useful to understand how Insight runs on your machine and what might be causing it to perform poorly. This could help you troubleshoot the problem yourself.
When you open a project, Insight only loads the basic details of the sessions and files (e.g. the name and type of each volume) into memory, not the data itself. The size of the project directory does NOT impact the performance of Insight. Generally, Insight should be as responsive for a 50TB project as it is for a 50MB project.
However, the size of a session – how many products must be loaded, which displays open up, etc. – can impact the responsiveness of Insight and the time it takes to open the session. Insight only loads into memory what is required to fulfill the display, so memory usage remains low until you display something in Insight's Views (there are several exceptions to this rule, which are highlighted below).
When you are displaying data, Insight renders from an in-memory cache, loaded from your products (and then, just the parts it needed). If the cache is larger than what is currently displayed, previously loaded data remains in memory and flicking between volumes can be very fast (since no further disk I/O is required). If, however, your memory allocation is too small (see Allocating Memory), Insight must discard the previous data in order to display the new, which makes rendering new data slow.
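The effect of the cache size can be illustrated with a toy least-recently-used cache. This is a minimal sketch of the general behaviour described above, not Insight's actual implementation: with a generous capacity, switching between two volumes needs no further disk reads; with a capacity too small for both, every switch evicts one volume and re-reads the other.

```python
from collections import OrderedDict

class TraceCache:
    """Toy LRU cache in the spirit of the behaviour described above.
    (Illustrative only; the names and sizes are hypothetical.)"""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.store = OrderedDict()  # key -> (size, data)
        self.disk_reads = 0

    def fetch(self, key, size):
        if key in self.store:
            self.store.move_to_end(key)    # cache hit: no disk I/O needed
            return self.store[key]
        self.disk_reads += 1               # cache miss: simulate a disk read
        while self.store and sum(s for s, _ in self.store.values()) + size > self.capacity:
            self.store.popitem(last=False) # evict the least-recently-used block
        self.store[key] = (size, b"...")
        return self.store[key]

# A generous cache: flicking between two volumes stays in memory.
big = TraceCache(capacity_bytes=200)
for key in ["volA", "volB", "volA", "volB"]:
    big.fetch(key, size=80)
print(big.disk_reads)    # 2 (each volume read from disk once)

# Too small for both volumes: every switch evicts and re-reads.
small = TraceCache(capacity_bytes=100)
for key in ["volA", "volB", "volA", "volB"]:
    small.fetch(key, size=80)
print(small.disk_reads)  # 4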
Simply limiting the number of products displayed by disabling them in the control panel will allow Insight to run a little faster (see Activating/Deactivating Items). Limiting a session to the products required – so that you are not updating a flattener you no longer use, for example – should also improve performance (see Irrelevant data below).
The following pointers are the most common reasons why Insight is not performing as well as it should be.
The amount of memory to allocate to Insight depends on the size and complexity of the project and how much RAM is available on your machine (see Allocating Memory).
As a precaution, you should leave approximately 2GB RAM for your operating system. If you run other applications simultaneously, you should also allow for these. If the entire computer begins to feel sluggish, you have probably chosen too large a number. It is generally better to allocate less if you are unsure. For example, on a machine with 16GB of total RAM, you should allocate no more than 14GB for Insight.
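The rule of thumb above can be written as a simple calculation. This is a rough guide only, not an official formula; the function name is our own:

```python
def recommended_allocation_gb(total_ram_gb, other_apps_gb=0, os_reserve_gb=2):
    """Suggest an upper bound for Insight's memory allocation: leave
    ~2 GB for the operating system, plus headroom for any other
    applications you run at the same time. (A rough guide only.)"""
    return max(total_ram_gb - os_reserve_gb - other_apps_gb, 0)

print(recommended_allocation_gb(16))                   # 14
print(recommended_allocation_gb(16, other_apps_gb=4))  # 10
```

When in doubt, round down: over-allocating makes the whole machine sluggish, while under-allocating only slows Insight.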
For recommended hardware requirements, please refer to System Requirements.
Storing volumes, horizons or projects on an external USB or network drive can cause slowness if the drive's I/O performance is not up to scratch. Generally, we recommend a connection capable of a sustained transfer rate of at least 30 MB/s.
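If you suspect the drive is the bottleneck, a quick sequential-write test against the project directory gives a rough figure to compare with the 30 MB/s guideline. This is a quick sanity check, not a rigorous benchmark, and the function name is our own:

```python
import os
import tempfile
import time

def sustained_throughput_mb_s(path, size_mb=256):
    """Rough sequential-write throughput test for the drive holding
    `path`. A result well below ~30 MB/s suggests the drive may be a
    bottleneck. (A quick sanity check, not a rigorous benchmark.)"""
    block = os.urandom(1024 * 1024)   # 1 MB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # force the data to the device
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

with tempfile.TemporaryDirectory() as d:
    rate = sustained_throughput_mb_s(os.path.join(d, "probe.bin"))
    print(f"{rate:.0f} MB/s")
```

Point `path` at the drive that actually holds your project; a temporary directory on the local disk, as above, only measures the local disk.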
Process volumes are generated on the fly by processes in Insight. Processes can be fed into one another, so that the end volume is the product of several individual processes.
For example, an Incoherence volume might be constructed using: Dip process -> SOF process -> Incoherence process
Viewing the final Incoherence process is computationally intensive and may be slow to display. In situations such as this, you can export the final process volume to disk to retain performance (see Exporting a Volume to DUG I/O).
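The cost of chained on-the-fly processes can be sketched as follows. This is a toy illustration, not Insight's code, and the function bodies are stand-ins: every time the final volume is displayed, the whole chain recomputes, whereas an exported volume is computed once and then simply read back.

```python
# Count how often each stage of a hypothetical Dip -> SOF -> Incoherence
# chain runs. The arithmetic stands in for the real computations.
calls = {"dip": 0, "sof": 0, "incoherence": 0}

def dip(trace):
    calls["dip"] += 1
    return trace + 1           # stand-in for the dip computation

def sof(trace):
    calls["sof"] += 1
    return dip(trace) * 2      # SOF consumes the dip process's output

def incoherence(trace):
    calls["incoherence"] += 1
    return sof(trace) - 3      # final process in the chain

# Displaying the process volume three times recomputes every stage each time:
for _ in range(3):
    incoherence(10)
print(calls)                   # each stage has now run 3 times

# "Exporting" the result (compute once, then re-read) avoids the recomputation:
exported = incoherence(10)
for _ in range(3):
    _ = exported               # just a read; no stage runs again
```

The deeper the chain, the more work each redisplay triggers, which is why exporting the end product to disk pays off.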
Phase rotation is also a computationally intensive operation. Propagating horizon picks on a phase rotated volume will be far slower than usual because Insight must perform a phase rotation for each trace prior to propagating new picks.
Phase rotated volumes are process volumes and should be exported and saved on disk (see Exporting a Volume to DUG I/O).
We have seen cases where a 2D survey comprising very closely spaced tie-points has caused performance issues. Overly dense tie-points can be due to poor, erratic navigation data in the original 2D SEG-Y files. You can check the number of tie-points defining each 2D line by exporting the survey file and opening it in a text editor such as WordPad. If you suspect this is the cause, we may need to downsample your survey for you. Alternatively, you can regenerate the survey file by re-loading the SEG-Y files and, at the survey creation screen, increasing the maximum error from its default of 10m to 20m.
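Rather than counting rows by eye in a text editor, a short script can tally tie-points per line. This sketch assumes a simple whitespace-separated layout where each row starts with the line name; the actual exported survey format may differ, so adjust the parsing accordingly:

```python
from collections import Counter

def tiepoints_per_line(survey_text):
    """Count tie-points per 2D line in an exported survey file.
    Assumes each row is whitespace-separated and starts with the line
    name -- a hypothetical layout; adapt to the real export format."""
    counts = Counter()
    for row in survey_text.splitlines():
        fields = row.split()
        if fields:
            counts[fields[0]] += 1
    return counts

sample = """LineA 1000.0 2000.0
LineA 1010.0 2000.0
LineB 1000.0 2100.0
"""
print(tiepoints_per_line(sample))  # Counter({'LineA': 2, 'LineB': 1})
```

A line with vastly more tie-points than its neighbours is a good candidate for the downsampling described above.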
Picking a horizon that is being used in another process such as a flattener can cause slowness because the flattening process updates with changes to the horizon.
Viewing TWT volumes in the TVD domain will place more demand on your machine due to the depth conversion process involved. Furthermore, it is important to ensure that the velocity volume used is not too densely sampled.
The standard practice in DUG's service division is to use a velocity model sampled at around 100m x 100m x 24ms/20m. If your velocity volume is denser than this, consider downsampling it using the velocity conversion process.
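To see why the sampling interval matters, compare the sample counts of the same hypothetical model at two spacings. The survey dimensions here are made up purely for illustration:

```python
def n_samples(x_m, y_m, t_ms, dx, dy, dt):
    """Samples in a regular grid covering x_m by y_m metres over
    t_ms milliseconds, at spacings dx, dy (m) and dt (ms)."""
    return (x_m // dx + 1) * (y_m // dy + 1) * (t_ms // dt + 1)

# A hypothetical 10 km x 10 km survey over 4000 ms TWT:
dense  = n_samples(10_000, 10_000, 4000, 25, 25, 4)     # finely sampled model
coarse = n_samples(10_000, 10_000, 4000, 100, 100, 24)  # recommended spacing
print(f"{dense / coarse:.0f}x more samples at the finer spacing")
```

The finely sampled model carries nearly two orders of magnitude more samples for the same volume of earth, with correspondingly more memory and processing demand during depth conversion.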
Insight only loads into memory the data required to satisfy the field of view. Zooming in on the map and section views reduces the field of view and, consequently, the amount of memory and processing time required.
As a last resort, any datasets that are not actively being used (such as contours, culture files, and non-essential horizons) can be hidden using the traffic light controls in the control panel. Less information displayed means less demand on your machine, and the faster Insight can perform.