Running any command returns “Cannot allocate memory” on Ubuntu Server
I’m using Ubuntu 14.04. Recently, when I log in via SSH with my user (which has sudo privileges), every command I run results in a “Cannot allocate memory” error. Here are a few I tried at my console:
myuser@mymachine:~$ whoami
-bash: fork: Cannot allocate memory
myuser@mymachine:~$ uname -a
-bash: fork: Cannot allocate memory
Even if I try sudo reboot now, I get the same error, so I don’t know what else I can try to unlock my instance. The host is DigitalOcean, if that matters.
Edit: Per the answer/suggestion given, here is the output of free:
myuser@mymachine:~$ free
-bash: fork: Cannot allocate memory
Tags: server, bash, ram, reboot, memory-usage
3 Answers
Solution
As the error messages say, your machine has run out of memory. This can happen for a number of reasons, but basically something is eating up all of your memory and leaving none for even basic commands.
I would suggest that you reboot your droplet (just go to your client control panel and select "Reboot"), SSH in, and then run top or htop. Keep an eye on the memory usage and see which process is using up all the memory (there is a quick one-liner sketch after the list below). From there, try either:
- Killing/removing the faulty program/process. WARNING: PLEASE do your research first on whether the process is an essential system process! If a system process is causing the memory issues, don't just kill it; research it and look for specific ways to deal with it.
- Changing the configuration of that program/process so that it doesn't eat up all of your memory.
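If you want a quick snapshot instead of watching top interactively, this one-liner (a minimal sketch, nothing droplet-specific) lists the biggest memory consumers:
# print the header line plus the ten processes using the most resident memory
ps aux --sort=-%mem | head -n 11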
Suggestions for preventing the issue from happening again
- Add swap space; it gives the system extra headroom to fall back on when RAM runs low (a sketch of one way to set it up follows this list).
- Whenever you install programs, make sure you configure them correctly so that they don't behave in unintended ways (like eating up memory).
- After you add a package or configure anything new, check with htop or top to see how much memory the current programs are using. If you notice you're using almost all of it, try to free some up by removing unnecessary programs/processes.
- If anything you don't recognize or don't want is being auto-started (besides system processes, of course!), remove it. But always do your research on what a process is before killing/deleting it, as it could be essential for the boot procedure or other system functions.
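For the swap suggestion above, here is a minimal sketch of adding a 1 GB swap file on Ubuntu 14.04; the size (1G) and path (/swapfile) are example choices, so adjust them for your droplet:
sudo fallocate -l 1G /swapfile                               # reserve 1 GB for the swap file
sudo chmod 600 /swapfile                                     # restrict access to root
sudo mkswap /swapfile                                        # format it as swap space
sudo swapon /swapfile                                        # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # keep it enabled across reboots
Afterwards, free -m or swapon -s should show the new swap space.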
To get out of this condition without rebooting, you can trigger the OOM killer manually as follows:
echo 1 > /proc/sys/kernel/sysrq    # enable the SysRq functions
echo f > /proc/sysrq-trigger       # invoke the OOM killer once
echo 0 > /proc/sys/kernel/sysrq    # disable SysRq again
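If the trigger works, the kernel log records which process the OOM killer chose; assuming you can run external commands again at that point, a quick way to check is:
# show recent OOM-killer activity from the kernel log
dmesg | grep -iE 'out of memory|killed process'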
Reference
- How does the OOM killer decide which process to kill first?
Do you have any documentation supporting those commands? Why not just sudo sysctl -w vm.oom_kill_allocating_task=1, or set it permanently in /etc/sysctl.conf?
– Pablo Bianchi, Feb 27 at 16:09
That doesn't sound like it would make a difference: the system never hits an actual OOM condition in this state, because no process is trying to allocate memory and no additional processes can be started. And, semi-unrelated, you wouldn't be able to use sudo or sysctl once in this state anyway.
– Luke F, Feb 27 at 21:59
To complement the accepted answer, there is one additional thing to consider: your system may run out of file handles or even socket buffers and still have plenty of memory while giving the same error. This is especially true if your shared hosting imposes limits of that nature. On OpenVZ systems, check the contents of
# cat /proc/user_beancounters
The rightmost column (failcnt) shows which limits have already been overrun. If that is the case, either move to a larger hosting package or hunt down the most likely culprit: a MySQL or MariaDB database which, in the presence of a defective PHP app, can leak file handles at a rate of hundreds per second.
This may also happen if your web server has SSH open to the internet and accepts username/password logins: even with fail2ban running, you may have attracted a distributed dictionary break-in attempt, which also consumes a lot of resources.
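As a rough sketch of those checks (assuming the usual user_beancounters layout, where the failure counter is the rightmost column, and that the database server process is named mysqld), you could run:
# print only the resources whose failcnt (rightmost column) is non-zero
sudo awk 'NR > 2 && $NF + 0 > 0' /proc/user_beancounters
# count how many files and sockets the MySQL server currently holds open
sudo lsof -p "$(pgrep -x mysqld)" | wc -l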