Heap dumps contain a snapshot of all the live objects that are being used by a running Java™ application on the Java heap. You can obtain detailed information for each object instance, such as the address, type, class name, or size, and whether the instance has references to other objects.

There are two formats for heap dumps: the classic format and the Portable Heap Dump (PHD) format, which is the default. While the classic format is generated in ASCII text and can be read directly, the PHD format is binary and must be processed for analysis.

Obtaining dumps

Heap dumps are generated by default in PHD format when the Java heap runs out of space. If you want to trigger the production of a heap dump in response to other situations, or in classic format, you can use one of the following options:

  • Configure the heap dump agent. For more information, see the -Xdump option.
  • Use the com.ibm.jvm.Dump API programmatically in your application code. For more information, see the JVM diagnostic utilities API documentation.

Analyzing dumps

The best method to analyze a PHD heap dump is to use the Eclipse Memory Analyzer™ tool (MAT) or the IBM Memory Analyzer tool. These tools process the dump file and provide a visual representation of the objects in the Java heap. Both tools require the Diagnostic Tool Framework for Java (DTFJ) plugin. To install the DTFJ plugin in the Eclipse IDE, select Help > Install New Software and add the IBM DTFJ update site (the installation steps are described in detail later in this page).

The following sections contain detailed information about the content of each type of heap dump file.

Portable Heap Dump (PHD) format

A PHD format dump file contains a header section and a body section. The body section can contain information about object, array, or class records. The file format is described in terms of primitive numbers: byte (8 bits), short (16 bits), int (32 bits), long (64 bits), word (32 or 64 bits, matching the word length recorded in the header flags), and variable-length UTF strings.

General structure

The following structure comprises the header section of a PHD file:

  • A UTF string indicating that the file is a portable heap dump
  • An int containing the PHD version number
  • An int containing flags, in which the following bit values can be set:
      • 1 indicates that the word length is 64-bit.
      • 2 indicates that all the objects in the dump are hashed. This flag is set for heap dumps that use 16-bit hash codes. Eclipse OpenJ9™ heap dumps use 32-bit hash codes that are created only when used. For example, these hash codes are created when the APIs Object.hashCode() or Object.toString() are called in a Java application. If this flag is not set, the presence of a hash code is indicated by the hash code flag on the individual PHD records.
      • 4 indicates that the dump is from an OpenJ9 VM.
  • A byte containing a tag with a value of 1 that indicates the start of the header.
  • A series of header records, each of which begins with a byte containing a header tag:
      • header tag 1 - not used
      • header tag 2 - indicates the end of the header
      • header tag 3 - not used
      • header tag 4 - indicates the VM version (variable-length UTF string)
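
To make the header layout concrete, the following minimal Java sketch reads the fields in the order listed. It assumes the PHD UTF strings match DataInput's length-prefixed UTF format and that the flags are stored in an int, as described above; treat it as an illustration of the structure, not a complete parser.

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class PhdHeaderSketch {
      public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
          String eyeCatcher = in.readUTF();      // "portable heap dump" marker string
          int version = in.readInt();            // PHD version number
          int flags = in.readInt();              // flag bits listed above (assumed int)
          boolean is64Bit    = (flags & 1) != 0; // word length is 64-bit
          boolean allHashed  = (flags & 2) != 0; // all objects carry a 16-bit hash code
          boolean fromOpenJ9 = (flags & 4) != 0; // dump produced by an OpenJ9 VM
          int startOfHeader = in.readUnsignedByte(); // tag value 1 starts the header
          System.out.printf("%s v%d 64-bit=%b allHashed=%b openj9=%b headerTag=%d%n",
              eyeCatcher, version, is64Bit, allHashed, fromOpenJ9, startOfHeader);
          // Header records follow, each introduced by a 1 byte header tag (1-4);
          // tag 2 marks the end of the header, tag 4 carries the VM version string.
        }
      }
    }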

The body of a PHD file is indicated by a byte that contains a tag with a value of 2, after which there are a number of dump records. Dump records are preceded by a 1 byte tag with the following record types:

  • Short object: 0x80 bit of the tag is set
  • Medium object: 0x40 bit of the tag is set (top bit value is 0)
  • Primitive Array: 0x20 bit of the tag is set (all other tag values have the top 3 bits with a value of 0)
  • Long record: tag value is 4
  • Class record: tag value is 6
  • Long primitive array: tag value is 7
  • Object array: tag value is 8

These records are described in more detail in the sections that follow.

The end of the PHD body is indicated by a byte that contains a tag with a value of 3.
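
The record types can be distinguished by examining the top bits of the tag before falling back to exact values. A small Java sketch of that dispatch (the method name is ours; only the tag logic from the lists above is encoded):

    public class PhdTagSketch {
      // Classify a 1 byte record tag according to the bit layout listed above.
      static String recordType(int tag) {
        if ((tag & 0x80) != 0) return "short object";    // top bit set
        if ((tag & 0x40) != 0) return "medium object";   // 0x40 set, top bit 0
        if ((tag & 0x20) != 0) return "primitive array"; // 0x20 set, top 2 bits 0
        switch (tag) {                                   // top 3 bits all 0
          case 2:  return "start of body";
          case 3:  return "end of body";
          case 4:  return "long object";
          case 6:  return "class";
          case 7:  return "long primitive array";
          case 8:  return "object array";
          default: return "unknown tag " + tag;
        }
      }
    }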

Object records

Object records can be short, medium, or long, depending on the number of object references in the heap dump.

1. Short object record

The following information is contained in this record:

  • The 1 byte tag. The 0x80 bit is set, and the remaining bits encode an index into a cache of recently used classes, the number of references, and the sizes of the gap and reference fields that follow.
  • A byte or a short containing the gap between the address of this object and the address of the preceding object (see the address arithmetic sketch after this list). The value is signed and represents the number of 32-bit words between the two addresses. Most gaps fit into 1 byte.
  • If all objects are hashed, a short containing the hash code.
  • The array of references, if references exist. The tag shows the number of elements and the size of each element. The value in each element is the gap between the address of the reference and the address of the current object. The value is a signed number of 32-bit words. Null references are not included.
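
The gap fields make all addresses relative. A sketch of the address arithmetic implied by the description above (assuming byte-addressed output; one gap unit is a 32-bit word):

    public class PhdGapSketch {
      // Addresses are byte addresses; gaps are signed counts of 32-bit words,
      // so one gap unit is 4 bytes regardless of the VM word length.
      static long objectAddress(long previousAddress, long gap) {
        return previousAddress + gap * 4;
      }

      // Reference gaps are relative to the current object's own address.
      static long referencedAddress(long currentAddress, long referenceGap) {
        return currentAddress + referenceGap * 4;
      }
    }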

2. Medium object record

These records provide the actual address of the class rather than a cache index. The following format is used:

  • The 1 byte tag. The 0x40 bit is set (with the top bit 0), and the remaining bits encode the number of references (up to 7) and the sizes of the gap and reference fields.
  • A byte or a short containing the gap between the address of this object and the address of the preceding object (see the Short object record description).
  • A word containing the address of the class of this object.
  • The array of references (see the Short object record description).

3. Long object record

This record format is used when there are more than 7 references, or if there are extra flags or a hash code. The following format is used:

  • The 1 byte tag, containing the value 4.
  • A byte containing flags, which encode the size of the gap field, the size of each reference, and whether the object was hashed and moved.
  • A word containing the address of the class of this object.
  • A byte, short, int, or long containing the gap between the address of this object and the address of the preceding object (see the Short object record description).
  • If all objects are hashed, a short containing the hash code. Otherwise, an optional int containing the hash code if the hashed and moved bit is set in the record flag byte.
  • An int containing the length of the array of references.
  • The array of references (see the Short object record description).

Array records

PHD arrays can be primitive arrays or object arrays, as described in the sections that follow.

1. Primitive array record

The following information is contained in a primitive array record:

  • The 1 byte tag. The 0x20 bit is set, and the remaining bits encode the element type of the array and the sizes of the gap and length fields that follow.
  • A byte, short, int, or long containing the gap between the address of this object and the address of the preceding object (see the Short object record description).
  • A byte, short, int, or long containing the array length.
  • An unsigned int containing the size of the instance of the array on the heap, including header and padding. The size is measured in 32-bit words, which you can multiply by four to obtain the size in bytes; this allows sizes up to 16GB to be encoded in an unsigned int (see the sketch after this list).
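
Decoding that unsigned size field in Java takes one masking step; a sketch (rawSize is an assumed variable name for the int read from the record):

    public class PhdArraySizeSketch {
      // rawSize is the int read from the record, interpreted as unsigned.
      static long arraySizeInBytes(int rawSize) {
        return (rawSize & 0xFFFFFFFFL) * 4L; // 32-bit words -> bytes; maximum is ~16GB
      }
    }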

2. Long primitive array record

This type of record is used when a primitive array has been hashed. It has the following format:

  • The 1 byte tag with a value of 7.
  • A byte containing flags, which encode the element type of the array and the sizes of the gap and length fields that follow.
  • A byte or word containing the gap between the address of this object and the address of the preceding object (see the Short object record description).
  • A byte or word containing the array length.

3. Object array record

The following format applies:

  • The 1 byte tag with a value of 8.
  • A byte, short, int, or long containing the gap between the address of this object and the address of the preceding object (see the Short object record description).
  • A word containing the address of the class of the objects in the array. Object array records do not update the class cache.
  • If all objects are hashed, a short containing the hash code. If the hashed and moved bit is set in the record flags, this field contains an int.
  • The array of references (see the Short object record description).
  • A final int value at the end, containing the true array length as a number of array elements. The true array length might differ from the length of the array of references because null references are excluded.

Class records

The PHD class record encodes a class object and has the following format:

  • The 1 byte tag, containing the value 6.
  • A byte, short, int, or long containing the gap between the address of this class and the address of the preceding object (see the Short object record description).
  • An int containing the instance size.
  • A word containing the address of the superclass.
  • A UTF string containing the name of this class.
  • An int containing the number of static references.
  • The array of static references (see the Short object record description).

Classic Heap Dump format

Classic heap dumps are produced in ASCII text on all platforms except z/OS, where they are encoded in EBCDIC. The dump is divided into the following sections:

Header record

A single string containing information about the runtime environment, platform, and build levels, similar to the following example:
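
An illustrative header record (the version string here is invented):

    // Version: J2RE 8.0 IBM J9 2.9 Linux amd64-64 Compressed References (JIT enabled, AOT enabled)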

Object records

A record of each object instance in the heap, with the following format:
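
A reconstructed sketch of the record shape (the placeholders follow the descriptions in this section):

    <object address, in hexadecimal> [<length in bytes>] OBJ <object type>
        <reference address> <reference address> ...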

The following object types (<object type>) might be shown:

  • class name (including package name)
  • class array type
  • primitive array type

These types are abbreviated in the record. To determine the type, see the Java VM Type Signatures table below.

Any references found are also listed, excluding references to an object's class or NULL references.

The following example shows an object instance (16 bytes in length) of type java/lang/String , with a reference to a char array:
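
An illustrative record (addresses invented for the sketch):

    0x00436E90 [16] OBJ java/lang/String
        0x00436EB0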

The object instance (length 32 bytes) of type char array, as referenced from the java/lang/String , is shown in the following example:
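
Continuing with the same invented addresses:

    0x00436EB0 [32] OBJ [C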

The following example shows an object instance (24 bytes in length) of type array of java/lang/String :
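
Again illustrative; the two references are the array's non-null elements:

    0x00436EE8 [24] OBJ [Ljava/lang/String;
        0x00436E90 0x00436EC8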

Class records

A record of each class, in the following format:
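
A reconstructed sketch of the class record shape, mirroring the object records above:

    <class address, in hexadecimal> [<length in bytes>] CLS <class type>
        <reference address> <reference address> ...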

The following class types (<class type>) might be shown:

  • primitive array types

Any references found in the class block are also listed, excluding NULL references.

The following example shows a class object (80 bytes in length) for java/util/Date , with heap references:
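
An illustrative class record (address and reference values invented):

    0x41532E68 [80] CLS java/util/Date
        0x004380C8 0x004380D8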

Trailer record 1

A single record containing record counts, in decimal.

For example:
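
A reconstructed example of the counts line (the total-objects figure matches trailer record 2 below; the other counts are invented):

    // Breakdown - Classes: 321, Objects: 7147, ObjectArrays: 62, PrimitiveArrays: 309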

Trailer record 2

A single record containing totals, in decimal.
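
A reconstructed example whose counts are the ones explained below:

    // EOF: Total 'Objects',Refs(null) : 7147,22040(12379)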

The values in the example reflect the following counts:

  • 7147 total objects
  • 22040 total references
  • 12379 total NULL references (shown in parentheses), included in the total references count

Java VM Type Signatures

The following table shows the abbreviations used for different Java types in the heap dump records:
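
These are the standard JVM type signatures:

    Signature      Java type
    Z              boolean
    B              byte
    C              char
    S              short
    I              int
    J              long
    F              float
    D              double
    L<classname>;  object of class <classname>
    [              one array dimension (for example, [C is a char array)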


Andy Balaam's Blog

Four in the morning, still writing Free Software

How to analyse a .phd heap dump from an IBM JVM


If you have been handed a .phd file which is a dump of the heap of an IBM Java virtual machine, you can analyse it using the Eclipse Memory Analyzer Tool (MAT), but you must install the IBM Monitoring and Diagnostic Tools first.

Download MAT from eclipse.org/mat/downloads.php. I suggest the Standalone version.

Unzip it and run the MemoryAnalyzer executable inside the zip. Add an argument to control how much memory it gets, e.g. to give it 4GB:
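
For example (the ./ prefix assumes you run it from the unzipped directory on Linux; the -vmargs -Xmx4g invocation matches the one suggested in the comments below):

    ./MemoryAnalyzer -vmargs -Xmx4g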

Once it’s started, go to Help -> Install new software.

Next to “Work with” paste in the URL for the IBM Developer Toolkit update site: http://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/runtimes/tools/dtfj/

Click Add…

Type in a name like “IBM Monitoring and Diagnostic Tools” and click OK.

In the list below, an item should appear called IBM Monitoring and Diagnostic Tools. Tick the box next to it, click Next, and follow the wizard to accept the license agreements and install the toolkit.

Restart Eclipse when prompted.

Choose File -> Open Heap Dump and choose your .phd file. It should open in MAT and allow you to figure out who is using all that memory.

Comments

Poonam: When I tried to update the ini file to 4g, MAT did not open. I needed to reset it to the original 1024m, and then when I opened the phd file I got the error "Error opening heap dump". Does someone know what to do?

Reply: Hi Poonam. If you are on Windows, type cmd in the search box to open a command prompt. In the command prompt, change directory (cd) to the directory that MemoryAnalyzer.exe is in, then type MemoryAnalyzer -vmargs -Xmx4g and press Enter.


Creating and Analyzing Java Heap Dumps

  • March 1, 2021

As Java developers, we are familiar with our applications throwing OutOfMemoryErrors or our server monitoring tools throwing alerts and complaining about high JVM memory utilization.

To investigate memory problems, the JVM heap memory is often the first place to look.

To see this in action, we will first trigger an OutOfMemoryError and then capture a heap dump. We will then analyze the heap dump to identify the objects that could be causing the memory leak.


What Is a Heap Dump?

Whenever we create a Java object by creating an instance of a class, it is always placed in an area known as the heap. Classes of the Java runtime are also created in this heap.

The heap gets created when the JVM starts up. It expands or shrinks during runtime to accommodate the objects created or destroyed in our application.

When the heap becomes full, the garbage collection process is run to collect the objects that are not referenced anymore (i.e. they are not used anymore). More information on memory management can be found in the Oracle docs.

Heap dumps contain a snapshot of all the live objects that are being used by a running Java application on the Java heap. We can obtain detailed information for each object instance, such as the address, type, class name, or size, and whether the instance has references to other objects.

Heap dumps have two formats:

  • the classic format, and
  • the Portable Heap Dump (PHD) format.

PHD is the default format. The classic format is human-readable since it is in ASCII text, but the PHD format is binary and should be processed by appropriate tools for analysis.

Sample Program to Generate an OutOfMemoryError

To explain the analysis of a heap dump, we will use a simple Java program to generate an OutOfMemoryError:
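
The original listing is not shown on this page; below is a minimal sketch consistent with the article's description. The class names OutOfMemoryErrorExample and GroceryProduct echo the analysis sections later in the article; the 1 MB image field is an assumption.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical stand-in for the article's domain class.
    class GroceryProduct {
      private final byte[] image = new byte[1024 * 1024]; // 1 MB per product
    }

    public class OutOfMemoryErrorExample {
      public static void main(String[] args) {
        List<GroceryProduct> products = new ArrayList<>();
        // Keep allocating until the JVM cannot satisfy the next request,
        // which throws java.lang.OutOfMemoryError: Java heap space.
        for (;;) {
          products.add(new GroceryProduct());
        }
      }
    }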

We keep allocating memory in a for loop until the JVM no longer has enough memory to allocate, at which point an OutOfMemoryError is thrown.

Finding the Root Cause of an OutOfMemoryError

We will now find the cause of this error by doing a heap dump analysis. This is done in two steps:

  • Capture the heap dump
  • Analyze the heap dump file to locate the suspected reason.

We can capture a heap dump in multiple ways. Let us capture the heap dump for our example, first with jmap and then by passing a VM argument on the command line.

Generating a Heap Dump on Demand with jmap

jmap is packaged with the JDK and extracts a heap dump to a specified file location.

To generate a heap dump with jmap, we first find the process ID of our running Java program with the jps tool, which lists all the running Java processes on our machine:
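
The listing looks something like this (the PIDs are illustrative):

    jps
    13924 OutOfMemoryErrorExample
    6784 Jps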

Next, we run the jmap command to generate the heap dump file:
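
The command follows this pattern (the PID comes from the jps output; the file name is a placeholder):

    jmap -dump:live,format=b,file=heapdump.hprof 13924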

After running this command, the heap dump file with the extension .hprof is created.

The option live is used to collect only the live objects that still have a reference in the running code. With the live option, a full GC is triggered to sweep away unreachable objects and then dump only the live objects.

Automatically Generating a Heap Dump on OutOfMemoryErrors

This option is used to capture a heap dump at the point in time when an OutOfMemoryError occurred. This helps to diagnose the problem because we can see what objects were sitting in memory and what percentage of memory they were occupying right at the time of the OutOfMemoryError.

We will use this option for our example since it will give us more insight into the cause of the crash.

Let us run the program with the VM option -XX:+HeapDumpOnOutOfMemoryError from the command line or our favorite IDE to generate the heap dump file:
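
A sketch of the command line (the small -Xmx is an assumption to make the error trigger quickly; the dump file name matches the output described below):

    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=hdump.hprof -Xmx100m OutOfMemoryErrorExample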

After running our Java program with these VM arguments, we get this output:
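
The output has roughly this shape (byte count and timing are invented):

    java.lang.OutOfMemoryError: Java heap space
    Dumping heap to hdump.hprof ...
    Heap dump file created [109246885 bytes in 0.431 secs]
    Exception in thread "main" java.lang.OutOfMemoryError: Java heap space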

As we can see from the output, the heap dump file with the name hdump.hprof is created when the OutOfMemoryError occurs.

Other Methods of Generating Heap Dumps

Some of the other methods of generating a heap dump are:

jcmd: jcmd is used to send diagnostic command requests to the JVM. It is packaged as part of the JDK and can be found in the \bin folder of a Java installation.
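
For example, the GC.heap_dump diagnostic command writes an HPROF file (the PID and path are placeholders):

    jcmd 13924 GC.heap_dump /tmp/heapdump.hprof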

JVisualVM : Usually, analyzing heap dump takes more memory than the actual heap dump size. This could be problematic if we are trying to analyze a heap dump from a large server on a development machine. JVisualVM provides a live sampling of the Heap memory so it does not eat up the whole memory.

Analyzing the Heap Dump

What we are looking for in a Heap dump is:

  • Objects with high memory usage
  • The object graph, to identify objects that are not releasing memory
  • Reachable and unreachable objects

Eclipse Memory Analyzer (MAT) is one of the best tools to analyze Java heap dumps. Let us understand the basic concepts of Java heap dump analysis with MAT by analyzing the heap dump file we generated earlier.

We will first start the Memory Analyzer Tool and open the heap dump file. In Eclipse MAT, two types of object sizes are reported:

  • Shallow heap size: The shallow heap of an object is its own size in memory.
  • Retained heap size: The retained heap is the amount of memory that will be freed when the object is garbage collected.

Overview Section in MAT

After opening the heap dump, we will see an overview of the application's memory usage. The pie chart in the overview tab shows the biggest objects by retained size.

For our application, this information in the overview means that if we could dispose of a particular instance of java.lang.Thread, we would save 1.7 GB, which is almost all of the memory used in this application.

Histogram View

While that might look promising, java.lang.Thread is unlikely to be the real problem here. To get better insight into what objects currently exist, we will use the Histogram view.

We have filtered the histogram with the regular expression "io.pratik.*" to show only the classes that match the pattern. With this view, we can see the number of live objects: for example, 243 BrandedProduct objects and 309 Price objects are alive in the system. We can also see the amount of memory each object is using.

There are two calculations: Shallow Heap and Retained Heap. The shallow heap is the amount of memory consumed by one object. An object requires 32 or 64 bits for each reference, depending on the architecture, and primitives such as integers and longs require 4 or 8 bytes. While this can be interesting, the more useful metric is the Retained Heap.

Retained Heap Size

The retained heap size is computed by adding the size of all the objects in the retained set. A retained set of X is the set of objects which would be removed by the Garbage Collector when X is collected.

The retained heap can be calculated in two different ways, using either the quick approximation or the precise retained size.

By calculating the Retained Heap we can now see that io.pratik.ProductGroup is holding the majority of the memory, even though it is only 32 bytes (shallow heap size) by itself. By finding a way to free up this object, we can certainly get our memory problem under control.

Dominator Tree

The dominator tree is used to identify the retained heap. It is derived from the complex object graph generated at runtime and helps to identify the largest memory graphs. An object X is said to dominate an object Y if every path from the GC roots to Y must pass through X.

Looking at the dominator tree for our example, we can see which objects are retained in memory.

We can see that the ProductGroup object holds the memory instead of the Thread object. We can probably fix the memory problem by releasing objects contained in this object.

Leak Suspects Report

We can also generate a “Leak Suspects Report” to find a suspected big object or set of objects. This report presents the findings on an HTML page and is also saved in a zip file next to the heap dump file.

Due to its smaller size, it is preferable to share the "Leak Suspects Report" with teams specialized in performing analysis tasks instead of the raw heap dump file.

The report has a pie chart, which gives the size of the suspected objects.

For our example, we have one suspect labeled as "Problem Suspect 1", which is further described with a short description.

Apart from the summary, this report also contains detailed information about the suspects, which is accessed by following the "details" link at the bottom of the report.

The detailed information comprises:

Shortest paths from GC root to the accumulation point: Here we can see all the classes and fields through which the reference chain goes, which gives a good understanding of how the objects are held. In this report, we can see the reference chain going from the Thread to the ProductGroup object.

Accumulated Objects in Dominator Tree: This gives some information about the accumulated content, which here is a collection of GroceryProduct objects.

In this post, we introduced the heap dump, which is a snapshot of a Java application’s object memory graph at runtime. To illustrate, we captured the heap dump from a program that threw an OutOfMemoryError at runtime.

We then looked at some of the basic concepts of heap dump analysis with Eclipse Memory Analyzer: large objects, GC roots, shallow vs. retained heap, and dominator tree, all of which together will help us to identify the root cause of specific memory issues.

Eclipse Memory Analyzer – Standalone Installation

In this tutorial let’s see how to

  • Download and start working with the Eclipse Memory Analyzer – Standalone version.
  • Open a Java heap dump created by the Sun/Oracle JDK (*.hprof) and a heap dump created by the IBM JDK (*.phd).

Search for "eclipse memory analyzer" and download the "Windows (x86_64)" version (if the Windows machine has a 64-bit JDK) from https://eclipse.org/mat/downloads.php


Save file and unzip it.


Launch MemoryAnalyzer.exe


If the default Java version is 1.7 or greater, MemoryAnalyzer will start without any issues.


Now we are all set to open a heap dump (*.hprof) generated by the Sun/Oracle JDK. But before opening it, let's increase the max Java heap size argument in "MemoryAnalyzer.ini", if needed:

-vmargs -Xmx1024m

Navigate to File -> Open Heap Dump. Select the hprof file.


Once we select the hprof file, it may take 15-20 minutes, depending on the heap dump size and the CPU of the local machine, to finish analyzing and open the report.


To Open an IBM JVM Heap Dump – Portable Heap Dump (PHD) format

IBM heap dumps are generated in the *.phd file format. To open *.phd heap dumps, we need to install the IBM Diagnostic Tool Framework for Java (DTFJ) from the URL below.

http://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/runtimes/tools/dtfj/

In the Eclipse Memory Analyzer window, navigate to Help -> Install New Software, provide the DTFJ URL, and press Enter.


Click Next twice, accept the terms of the license agreements, and then click Finish. The IBM Diagnostic Tool Framework will start installing; this may take 5-10 minutes. Once the installation is completed, press "Yes" to restart Eclipse.


Once Eclipse is restarted, we can see *.phd files under the known formats. To check this, navigate to File -> Open Heap Dump and select the phd file.


Now the phd file will be loaded and analyzed. This step may take 15-20 minutes, depending on the heap dump size.


Some general errors we may face during initial use, and solutions for them, are provided below.

One set of startup errors occurs when Memory Analyzer is invoked with Java 1.6; they disappear when Java 1.7 is used.

Heap space

Sometimes, while parsing heap dumps, the process fails partway through with a heap space error.

In such scenarios, increase the Xmx value in MemoryAnalyzer.ini and try again.




MemoryAnalyzer


The Eclipse Memory Analyzer tool (MAT) is a fast and feature-rich heap dump analyzer that helps you find memory leaks and analyze high memory consumption issues.

With Memory Analyzer one can easily

  • find the biggest objects, as MAT provides reasonable accumulated size (retained size)
  • explore the object graph, both inbound and outbound references
  • compute paths from the garbage collector roots to interesting objects
  • find memory waste, like redundant String objects, empty collection objects, etc...

Getting Started

Installation

See the download page for installation instructions.

Basic Tutorials

Both the Basic Tutorial chapter in the MAT documentation and the Eclipse Memory Analyzer Tutorial by Lars Vogel are a good first reading, if you are just starting with MAT.

Further Reading

Check MemoryAnalyzer/Learning Material. You will find there a collection of presentations and web articles on Memory Analyzer, which are also a good resource for learning. The pages Querying Heap Objects (OQL), OQL Syntax, and MemoryAnalyzer/OQL also explain some of the ways to use the Object Query Language (OQL).

Getting a Heap Dump

HPROF Dumps from Sun Virtual Machines

The Memory Analyzer can work with HPROF binary formatted heap dumps. Those heap dumps are written by Sun HotSpot and any VM derived from HotSpot. Depending on your scenario, your OS platform, and your JDK version, you have different options to acquire a heap dump.

Non-interactive

If you run your application with the VM flag -XX:+HeapDumpOnOutOfMemoryError, a heap dump is written on the first Out Of Memory Error. There is no overhead involved unless an OOM actually occurs. This flag is a must for production systems, as it is often the only way to further analyze the problem.

As per this article, the heap dump will be generated in the "current directory" of the JVM by default. It can be explicitly redirected with -XX:HeapDumpPath=, for example -XX:HeapDumpPath=/disk2/dumps. Note that the dump file can be huge, up to gigabytes, so ensure that the target file system has enough space.

Interactive

As a developer, you want to trigger a heap dump on demand. On Windows, use JDK 6 and JConsole. On Linux and Mac OS X, you can also use jmap, which comes with JDK 5.

Via Java VM parameters:

  • -XX:+HeapDumpOnOutOfMemoryError writes heap dump on OutOfMemoryError (recommended)
  • -XX:+HeapDumpOnCtrlBreak writes heap dump together with thread dump on CTRL+BREAK
  • -agentlib:hprof=heap=dump,format=b combines the above two settings (old way; not recommended as the VM frequently dies after CTRL+BREAK with strange errors)
  • Sun (Linux, Solaris; not on Windows) JMap Java 5: jmap -heap:format=b <pid>
  • Sun (Linux, Solaris; Windows see link) JMap Java 6: jmap.exe -dump:format=b,file=HeapDump.hprof <pid>
  • Sun (Linux, Solaris) JMap with Core Dump File: jmap -dump:format=b,file=HeapDump.hprof /path/to/bin/java core_dump_file
  • Sun JConsole: Launch jconsole.exe and invoke operation dumpHeap() on HotSpotDiagnostic MBean
  • SAP JVMMon: Launch jvmmon.exe and call menu for dumping the heap

The heap dump will be written to the working directory.

System Dumps and Heap Dumps from IBM Virtual Machines

Memory Analyzer may read memory-related information from IBM system dumps and from Portable Heap Dump (PHD) files with the IBM DTFJ feature installed. Once the feature is installed, File > Open Heap Dump should give the following options for the file types:

  • All known formats
  • HPROF binary heap dumps
  • IBM 1.4.2 SDFF
  • IBM Javadumps
  • IBM SDK for Java (J9) system dumps
  • IBM SDK for Java Portable Heap Dumps

For a comparison of dump types, see Debugging from dumps. System dumps are simply operating system core dumps; therefore, they are a superset of portable heap dumps. System dumps are far superior to PHDs, particularly for more accurate GC roots and thread-based analysis; unlike PHDs, system dumps also contain memory contents, as HPROF dumps do.

Older versions of IBM Java (e.g. < 5.0SR12, < 6.0SR9) require running jextract on the operating system core dump, which produces a zip file containing the core dump, an XML or SDFF file, and the shared libraries. The IBM DTFJ feature still supports reading these jextracted zips; however, newer versions of IBM Java do not require jextract for use in MAT, since DTFJ is able to directly read each supported operating system's core dump format. Simply ensure that the operating system core dump file ends with the .dmp suffix for visibility in the MAT Open Heap Dump selection. It is also common to zip core dumps because they are so large and compress very well; if a core dump is compressed with .zip, the IBM DTFJ feature in MAT is able to decompress the ZIP file and read the core from inside (just like a jextracted zip).

The only significant downsides to system dumps over PHDs are that they are much larger, they usually take longer to produce, they may be useless if they are manually taken in the middle of an exclusive event that manipulates the underlying Java heap (such as a garbage collection), and they sometimes require operating system configuration (Linux, AIX) to ensure non-truncation.

In recent versions of IBM Java (> 6.0.1), by default, when an OutOfMemoryError is thrown, IBM Java produces a system dump, PHD, javacore, and Snap file on the first occurrence for that process (although often the core dump is suppressed by the default 0 core ulimit on operating systems such as Linux). For the next three occurrences, it produces only a PHD, javacore, and Snap. If you only plan to use system dumps, and you've configured your operating system correctly as per the links above (particularly core and file ulimits), then you may disable PHD generation with -Xdump:heap:none. For versions of IBM Java older than 6.0.1, you may switch from PHDs to system dumps using -Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError,request=exclusive+prepwalk -Xdump:heap:none

In addition to an OutOfMemoryError, system dumps may be produced using operating system tools (e.g. gcore in gdb for Linux, gencore for AIX, Task Manager for Windows, SVCDUMP for z/OS, etc.), using the IBM Java APIs, using the various options of -Xdump, using Java Surgery, and more.

Versions of IBM Java older than IBM JDK 1.4.2 SR12, 5.0 SR8a and 6.0 SR2 are known to produce inaccurate GC root information.

What if the Heap Dump is NOT Written on OutOfMemoryError?

Heap dumps are not written on OutOfMemoryError for the following reasons:

  • Application creates and throws OutOfMemoryError on its own
  • Another resource like threads per process is exhausted
  • C heap is exhausted

As for the C heap, the best way to see that you won't get a heap dump is if the allocation failure happens in C code, for example in a native frame such as eArray.cpp.

C heap problems may arise for different reasons, e.g. out-of-swap-space situations, process limit exhaustion, or just address space limitations, e.g. heavy fragmentation or outright depletion on machines with limited address space like 32-bit machines. The hs_err file will help you with more information on this type of error. Java heap dumps wouldn't be of any help anyway.

Also please note that a heap dump is written only on the first OutOfMemoryError. If the application chooses to catch it and continues to run, the next OutOfMemoryError will never cause a heap dump to be written!

Extending Memory Analyzer

Memory Analyzer is extensible, so new queries and dump formats can be added. Please see MemoryAnalyzer/Extending_Memory_Analyzer for details.


Eclipse Memory Analyzer (MAT) is a tool that is used to analyze a heap dump.

This assumes you have:

  • installed Memory Analyzer (MAT) in Eclipse
  • installed Memory Analyzer (MAT) on Linux

In Eclipse Memory Analyzer, select File > Open a Heap Dump and select the heap dump file. The heap dump file should end with the .hprof extension (e.g. java_pid3158.hprof).

Note: It is important to recognize that the heap dump file must have the file extension HPROF. By default, a WebSphere heap dump has the file extension PHD (Portable Heap Dump), which means that by default a WebSphere heap dump cannot be analyzed by Eclipse Memory Analyzer. To be able to analyze a WebSphere PHD file, you will need to install the IBM Diagnostic Tool Framework for Java (DTFJ) feature, as described in the installation steps earlier in this page.

Once opened, the default view should display a pie chart.


Probably the most useful views in the tool are Run Expert System Test > Heap Dump Overview and Run Expert System Test > Leak Suspects. For example, the Leak Suspects view identifies the classes in the heap dump that may be producing a memory leak.


Be aware that a number of files will be created in the same directory as the heapdump file being analyzed. The files will start with the text "heapdump". The files can be deleted after the heap dump has been analyzed.


Having problems opening a PHD file?

The .PHD file extension has multiple uses, and different software may use files with the same extension for different types of data. Three known uses of the extension are:

  • Portable Heap Dump - the IBM/OpenJ9 Java heap dump format described in this page
  • PERQemu Hard Disk Image
  • PolyHedral Database

Unless you are sure which format your PHD file is, you may need to try a few different programs; a .phd file that fails to open in a heap dump analyzer may simply belong to one of the other formats.

An official website of the United States Government

  • Kreyòl ayisyen
  • Search Toggle search Search Include Historical Content - Any - No Include Historical Content - Any - No Search
  • Menu Toggle menu
  • INFORMATION FOR…
  • Individuals
  • Business & Self Employed
  • Charities and Nonprofits
  • International Taxpayers
  • Federal State and Local Governments
  • Indian Tribal Governments
  • Tax Exempt Bonds
  • FILING FOR INDIVIDUALS
  • How to File
  • When to File
  • Where to File
  • Update Your Information
  • Get Your Tax Record
  • Apply for an Employer ID Number (EIN)
  • Check Your Amended Return Status
  • Get an Identity Protection PIN (IP PIN)
  • File Your Taxes for Free
  • Bank Account (Direct Pay)
  • Payment Plan (Installment Agreement)
  • Electronic Federal Tax Payment System (EFTPS)
  • Your Online Account
  • Tax Withholding Estimator
  • Estimated Taxes
  • Where's My Refund
  • What to Expect
  • Direct Deposit
  • Reduced Refunds
  • Amend Return

Credits & Deductions

  • INFORMATION FOR...
  • Businesses & Self-Employed
  • Earned Income Credit (EITC)
  • Child Tax Credit
  • Clean Energy and Vehicle Credits
  • Standard Deduction
  • Retirement Plans

Forms & Instructions

  • POPULAR FORMS & INSTRUCTIONS
  • Form 1040 Instructions
  • Form 4506-T
  • POPULAR FOR TAX PROS
  • Form 1040-X
  • Circular 230

IRS makes Direct File a permanent option to file federal tax returns; expanded access for more taxpayers planned for the 2025 filing season

More in news.

  • Topics in the News
  • News Releases for Frequently Asked Questions
  • Multimedia Center
  • Tax Relief in Disaster Situations
  • Inflation Reduction Act
  • Taxpayer First Act
  • Tax Scams/Consumer Alerts
  • The Tax Gap
  • Fact Sheets
  • IRS Tax Tips
  • e-News Subscriptions
  • IRS Guidance
  • Media Contacts
  • IRS Statements and Announcements

IR-2024-151, May 30, 2024

WASHINGTON — Following a successful filing season pilot and feedback from a variety of partners, the Internal Revenue Service announced today that it will make Direct File a permanent option for filing federal tax returns starting in the 2025 tax season.

The agency is exploring ways to expand Direct File to make more taxpayers eligible in the 2025 filing season and beyond by examining options to broaden Direct File’s availability across the nation, including covering more tax situations and inviting all states to partner with Direct File next year.

The IRS plans to announce additional details on the 2025 expansion in the coming months.

The decision follows a highly successful, limited pilot during the 2024 filing season, where 140,803 taxpayers in 12 states filed their taxes using Direct File. The IRS closely analyzed data collected during the pilot, held numerous meetings with diverse groups of stakeholders and gathered feedback from individual Direct File users, state officials and representatives across the tax landscape. The IRS heard directly from hundreds of organizations across the country, more than a hundred members of Congress and from those interested in using Direct File in the future. The IRS has also heard from a limited number of stakeholders who believe the current free electronic filing options provided by third party vendors are adequate.

The IRS will continue data analysis and stakeholder engagement to identify improvements to Direct File; however, initial post-pilot analysis yielded enough information for the decision to make Direct File a permanent filing option. The IRS noted that an early decision on 2025 was critical for planning and programming both for the IRS and for additional states to join the program. IRS Commissioner Danny Werfel recommended to Secretary of the Treasury Janet L. Yellen to make Direct File permanent. He cited overwhelming satisfaction from users and improved ease of tax filing among the reasons for his recommendation, which Secretary Yellen has accepted.



