Category:   Application (File Transfer/Sharing)  >   Red Hat Gluster Storage
Vendors:   Red Hat
Red Hat Storage 'rhscon-ceph' Command Line Parameter Password Lets Local Users View the Password
SecurityTracker Alert ID:  1037062
SecurityTracker URL:
CVE Reference:   CVE-2016-7062   (Links to External Site)
Date:  Oct 19 2016
Impact:   Disclosure of authentication information
Fix Available:  Yes  Vendor Confirmed:  Yes  
Version(s): Console Agent 2
Description:   A vulnerability was reported in Red Hat Storage. A local user can obtain passwords on the target system.

The 'rhscon-ceph' application supplies the password to 'rhscon-core' in plain text as a command-line parameter. A local user can run the 'ps -ef' command to list processes and read the password.
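The exposure can be demonstrated with a minimal sketch (not Red Hat's code; the child process and the '--password=hunter2' placeholder are invented for illustration). On Linux, any argument on a process's command line is world-readable from /proc, which is exactly the data `ps -ef` reports:

```python
import subprocess
import sys
import time

# Unsafe pattern behind CVE-2016-7062: a secret passed on the command
# line. The child process and '--password=hunter2' are placeholders.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)", "--password=hunter2"]
)
time.sleep(0.5)  # give the child a moment to finish exec'ing

# Any local user can read another process's argv from /proc on Linux;
# `ps -ef` shows the same data.
with open("/proc/%d/cmdline" % child.pid, "rb") as f:
    argv = f.read().split(b"\0")

child.kill()
print(argv)  # the placeholder secret appears in plain text
```

Because /proc/&lt;pid&gt;/cmdline is readable by all local users, no elevated privilege is needed to recover the argument vector.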

Impact:   A local user can obtain the 'rhscon-core' password.
Solution:   Red Hat has issued a fix.

The Red Hat advisory is available at:

Vendor URL: (Links to External Site)
Cause:   Access control error
Underlying OS:  Linux (Red Hat Enterprise)
Underlying OS Comments:  7

Message History:   None.

 Source Message Contents

Subject:  [RHSA-2016:2082-01] Moderate: Red Hat Storage Console 2 security and bug fix update


                   Red Hat Security Advisory

Synopsis:          Moderate: Red Hat Storage Console 2 security and bug fix update
Advisory ID:       RHSA-2016:2082-01
Product:           Red Hat Storage Console
Advisory URL:
Issue date:        2016-10-19
CVE Names:         CVE-2016-7062 

1. Summary:

An update is now available for Red Hat Storage Console 2 for Red Hat
Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Storage Console Agent 2 - noarch
Red Hat Storage Console Installer 2 - noarch
Red Hat Storage Console Main 2 - noarch, x86_64

3. Description:

Red Hat Storage Console is a new Red Hat offering for storage
administrators that provides a graphical management platform for Red Hat
Ceph Storage 2. Red Hat Storage Console allows users to install, monitor,
and manage a Red Hat Ceph Storage cluster.

Security Fix(es):

* A flaw was found in the way authentication details were passed between
rhscon-ceph and rhscon-core. An authenticated, local attacker could use
this flaw to recover the cleartext password. (CVE-2016-7062)
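A common mitigation for this class of flaw (a general sketch, not the actual rhscon fix; the child command and placeholder secret are invented) is to hand the secret to the child over stdin rather than argv, since stdin never appears in the process list:

```python
import subprocess
import sys

secret = "hunter2"  # placeholder secret, not a real credential

# Pass the secret over stdin: it never appears in the child's argv,
# so `ps -ef` and /proc/<pid>/cmdline reveal nothing.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(len(sys.stdin.read()))"],
    input=secret,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # 7 -- the child read the secret off stdin
```

Environment variables are another option, though on some systems /proc/&lt;pid&gt;/environ is readable by processes of the same user, so stdin or a permission-restricted file is generally the safer channel.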

Bug Fix(es):

* Previously, the placement group (PG) count was calculated on a per-pool
basis instead of at the cluster level. With this fix, automatic calculation
of PGs is disabled and the Ceph PG calculator is used to calculate PG
values per OSD, keeping the cluster in a healthy state. (BZ#1366577,
BZ#1375538)
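The sizing the fix adopts follows the widely documented rule of thumb behind the Ceph PG calculator; a sketch of that arithmetic (illustrative values and function name, not the shipped code):

```python
def pg_count(osds, replicas, pools, target_per_osd=100):
    """Rule-of-thumb PG count per the Ceph PG calculator: aim for roughly
    target_per_osd PGs on each OSD, divide by the replication factor and
    the number of pools, and round up to the next power of two."""
    raw = osds * target_per_osd / (replicas * pools)
    power = 1
    while power < raw:
        power *= 2
    return power

# 9 OSDs, 3-way replication, one pool: 9 * 100 / 3 = 300 -> next power of two
print(pg_count(osds=9, replicas=3, pools=1))  # 512
```

Sizing per OSD rather than per pool is what prevents the "too many PGs per OSD" HEALTH_WARN state that BZ#1366577 describes.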

* Previously, issuing a command to compact the data store during a rolling
upgrade rendered the Ceph monitors unresponsive. With this fix, the
compaction command is skipped during a rolling upgrade, and the Ceph
monitors remain responsive. (BZ#1372481)

* Previously, a rolling upgrade failed when a custom cluster name other
than 'ceph' was used, causing the ceph-ansible play to abort. With this
fix, flags indicating the cluster name (defaulting to 'ceph' when
unspecified) are included, and the Ansible playbook succeeds with custom
cluster names. (BZ#1373919)

* Previously, the pools list in the Console displayed incorrect storage
utilization and capacity data due to multiple CRUSH hierarchies. With this
fix, the pools list in the Console displays the correct storage utilization
and capacity data.

* Previously, the CPU utilization chart displayed only the user processes'
CPU utilization and omitted system CPU utilization. With this fix, the CPU
utilization chart displays the combined user and system CPU utilization.

* Previously, network utilization was calculated without accounting for
duplex: a full-duplex channel carries traffic in both directions
simultaneously, so its effective bandwidth is twice the line rate. With
this update, network utilization is now calculated properly. (BZ#1366242)
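The corrected accounting can be sketched as follows (an illustration of the duplex arithmetic only; the function name and figures are invented, not the shipped code):

```python
def utilization_pct(rx_bps, tx_bps, link_speed_bps):
    """Utilization of a full-duplex link: it can carry line-rate traffic
    in each direction at once, so total capacity is twice the link speed."""
    return 100.0 * (rx_bps + tx_bps) / (2 * link_speed_bps)

# A 1 Gb/s NIC moving 600 Mb/s in and 400 Mb/s out is 50% utilized;
# dividing by the line rate alone would wrongly report 100%.
print(utilization_pct(600e6, 400e6, 1e9))  # 50.0
```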

* Previously, on the Host list page, incorrect chart data was displayed in
the utilization charts. With this fix, the charts display correct data.
(BZ#1358270)

* Previously, Calamari failed to reflect the correct values for OSD status.
With this update, the issue has been fixed and the dashboard displays the
correct, real-time OSD status. (BZ#1359129)

* Previously, the text on the Add Storage tab was confusing due to an
unclear description of the storage type. With this fix, the text has been
updated and a short description of pools and RBDs is provided to remove
the ambiguity. (BZ#1365983)

* Previously, while importing a cluster with collocated journals, the
journal size was populated incorrectly in the MongoDB database. With this
fix, the journal size and journal path are displayed correctly in the OSD
summary of the Host OSDs tab. (BZ#1365998)

* Previously, the clusters list in the console displayed IOPS with an
incorrect unit. With this fix, the units are removed and IOPS are shown
correctly as a plain numeric count. (BZ#1366048)

* Previously, when checking cluster system performance, selecting an
elapsed-hours range incorrectly displayed tick marks on both range options.
With this fix, the console displays the system performance graph with a
tick mark only on the selected elapsed-hours range. (BZ#1366081)

* Previously, the journal device details did not synchronize as expected
during the pool creation and cluster import workflows. With this fix, the
actual device details for OSD journals are fetched and displayed as
expected in the UI. (BZ#1342969)

All users of Red Hat Storage Console are advised to upgrade to these
updated packages, which fix these bugs.

4. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

5. Bugs fixed:

1342969 - OSD journal details provides incorrect journal size
1346379 - Command line parameters exposed (too spurious) as well as passwords shown
1358267 - Wrong size and utilization of pool
1358270 - cpu utilization charts on Host list dashboard doesn't match reported values
1358461 - cpu utilization values reported by RHSC 2.0 are wrong
1358832 - Enable mongodb authentication
1359129 - Bad OSD status
1365983 - [RFE]Very confusing "Add Storage" UI organization
1365998 - Incoherent OSD journal size display in the UI
1366048 - Cluster list window shows incorrect performance unit
1366081 - Cluster Performance Graph Range Selection Popup Broken
1366242 - Network utilization is not calculated properly
1366577 - Wrong calculation of PGs peer OSD leads to cluster in HEALTH_WARN state with explanation "too many PGs per OSD (768 > max 300)"
1366620 - Node initalization fails with "loop" type of disks on node
1371496 - Network utilization doesn't work with SELinux in enforcing mode
1371848 - Installation of ceph-installer failing on RHEL 7.3 because of conflicts with file from package firewalld-filesystem
1372481 - [ceph-ansible] : rolling_update got hung in task 'compress the store as much as possible'
1373919 - [ceph-ansible] : rolling update will fail if cluster name is other than 'ceph'
1375538 - PG count for pool creation is hard set and calculated in a wrong way
1375972 - when cluster is expanded (new machine added), console doesn't warn admin about implications of associated recovery operation
1381681 - CVE-2016-7062 rhscon-ceph: password leak by command line parameter

6. Package List:

Red Hat Storage Console Agent 2:



Red Hat Storage Console Installer 2:



Red Hat Storage Console Main 2:




These packages are GPG signed by Red Hat for security.  Our key and
details on how to verify the signature are available from

7. References:

8. Contact:

The Red Hat security contact is <>. More contact
details at

Copyright 2016 Red Hat, Inc.