By default, the Dgraph is allowed to use up to 80% of the RAM available on its host machine. This prevents it from running into out-of-memory performance issues. The Dgraph also uses a considerable amount of virtual memory, which it needs for ingesting data and executing queries. This is expected behavior and can be observed with system diagnostic tools.
If the Dgraph reaches its memory consumption limit, it cancels queries, beginning with the one consuming the most memory. Each time the Dgraph cancels a query, it logs the amount of memory the query was using and the time it was cancelled, for diagnostic purposes.
The Dgraph retains the physical memory it's using indefinitely, unless it's running on the same node as other memory-intensive processes, in which case it will release a significant portion fairly quickly. Because of this, depending on your requirements and available resources, you may want to host the Dgraph on dedicated nodes.
In some cases, you will be required to host the Dgraph on nodes with other processes; for example, if your databases are on HDFS, the Dgraph must be hosted on HDFS DataNodes. Oracle recommends limiting the number of processes running on these nodes. In particular, you shouldn't host the Dgraph on the same node as Spark. You should also use Linux cgroups (control groups) to ensure the Dgraph has access to the resources it requires; for more information, see Setting up cgroups for the Dgraph.
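As a minimal sketch of the cgroups approach, the following assumes a Linux host with cgroups v2 mounted at /sys/fs/cgroup; the group name "dgraph", the 48 GiB cap, and the Dgraph binary path are all illustrative assumptions, not values from this guide. See Setting up cgroups for the Dgraph for the supported procedure.

```shell
# Sketch only: create a cgroup-v2 group and cap its memory.
# The group name "dgraph" and the 48 GiB limit are illustrative values.
sudo mkdir -p /sys/fs/cgroup/dgraph
echo $((48 * 1024 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/dgraph/memory.max

# Move the current shell into the group, then exec the Dgraph so that it
# (and any child processes) inherit the limit. The binary path is an assumption.
echo $$ | sudo tee /sys/fs/cgroup/dgraph/cgroup.procs
exec /opt/dgraph/bin/dgraph
```

On hosts still using cgroups v1, the equivalent control file is memory.limit_in_bytes, and tools such as cgcreate/cgexec from libcgroup can be used instead of writing to the filesystem directly.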
You can also set a custom limit on the amount of memory the Dgraph can consume using the --memory-limit flag. For more information, see Changing the Dgraph memory limit.
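As a hedged illustration of passing the flag at startup, the invocation below assumes a direct command-line launch; the binary path, the unit of the limit value, and any other arguments are assumptions. Consult Changing the Dgraph memory limit for the exact syntax and units for your release.

```shell
# Illustrative only: cap the Dgraph's memory consumption at startup.
# The binary path and the meaning of the numeric value are assumptions.
/opt/dgraph/bin/dgraph --memory-limit 40960
```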