[kolla][ceph] Cache OSDs didn't stay in the root=cache after ceph deployment.


Hi Eugen, first of all, thanks for your reply.

I tested what you said and added "osd crush update on start = False" in the
pre-deploy config file (/etc/kolla/config/ceph.conf),
then destroyed & re-deployed again. Now the cache OSDs have stayed in the
right bucket after the ceph deployment.
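
For reference, a minimal sketch of that override file, assuming kolla merges
/etc/kolla/config/ceph.conf into the generated ceph.conf and that the option
is honored in the [global] section:

    # /etc/kolla/config/ceph.conf
    [global]
    osd crush update on start = false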
Thanks again for your advice; everything works now.

Appreciate,
Eddie.

Eugen Block <eblock at nde.ag> wrote on Mon, Jul 1, 2019 at 8:40 PM:

> Hi,
>
> although I'm not familiar with kolla I can comment on the ceph part.
>
> > The problem is that the OSD didn't stay in the "cache" bucket; it stayed
> > in the "default" bucket.
>
> I'm not sure how the deployment process with kolla works and what
> exactly is done here, but this might be caused by this option [1]:
>
> osd crush update on start
>
> Its default is "true". We ran into this some time ago and were
> wondering why the OSDs were in the wrong bucket every time we restarted
> services. As I said, I don't know how exactly this would affect you,
> but you could set that config option to "false" and see if that still
> happens.
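>
> As a sketch of how to check and persist it at runtime, assuming a release
> with the centralized config store (Mimic or later) and osd.5 as a
> hypothetical cache OSD (the first command runs on that OSD's host):
>
>     # show the value the running OSD actually uses
>     ceph daemon osd.5 config get osd_crush_update_on_start
>
>     # persist "false" for all OSDs in the cluster config
>     ceph config set osd osd_crush_update_on_start false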
>
>
> Regards,
> Eugen
>
> [1] http://docs.ceph.com/docs/master/rados/operations/crush-map/
>
> Quoting Eddie Yen <missile0407 at gmail.com>:
>
> > Hi,
> >
> > I'm using stable/rocky to try ceph cache tiering.
> > Now I'm facing an issue.
> >
> > I chose one SSD to become the cache tier disk, and set the options below
> > in globals.yml.
> > ceph_enable_cache = "yes"
> > ceph_target_max_byte = "<size num>"
> > ceph_target_max_objects = "<object num>"
> > ceph_cache_mode = "writeback"
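> >
> > (The same settings in YAML form, since globals.yml uses colons rather
> > than equals signs; a sketch assuming the stable/rocky variable names,
> > where I believe the bytes option is pluralized as ceph_target_max_bytes:)
> >
> >     ceph_enable_cache: "yes"
> >     ceph_target_max_bytes: "<size num>"
> >     ceph_target_max_objects: "<object num>"
> >     ceph_cache_mode: "writeback"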
> >
> > And the default OSD type is bluestore.
> >
> >
> > It will bootstrap the cache disk and create another OSD container.
> > It also creates a root bucket called "cache", then sets the cache rule on
> > every cache pool.
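> >
> > (Roughly the plumbing kolla automates here, as a sketch with hypothetical
> > pool names "images" and "images-cache":)
> >
> >     # attach the cache pool as a writeback tier of the backing pool
> >     ceph osd tier add images images-cache
> >     ceph osd tier cache-mode images-cache writeback
> >     ceph osd tier set-overlay images images-cache
> >
> >     # pin the cache pool to the "cache" CRUSH rule and set its limits
> >     ceph osd pool set images-cache crush_rule cache
> >     ceph osd pool set images-cache target_max_bytes <size num>
> >     ceph osd pool set images-cache target_max_objects <object num>
> >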
> > The problem is that the OSD didn't stay in the "cache" bucket; it stayed
> > in the "default" bucket.
> > That made the services unable to access Ceph normally, especially when
> > deploying Gnocchi.
> >
> > When the error occurred, I manually moved that OSD to the cache bucket
> > and re-deployed, and now everything is normal.
> > But it's still strange that it ends up in the wrong bucket.
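> >
> > (The manual move can be sketched like this; osd.5, the 1.0 weight, and
> > the host name are placeholders:)
> >
> >     # check which root each OSD landed in
> >     ceph osd tree
> >
> >     # relocate the cache OSD under root=cache
> >     ceph osd crush create-or-move osd.5 1.0 root=cache host=<cache-host>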
> >
> > Did I miss something during deployment? Or what can I do?
> >
> >
> > Many thanks,
> > Eddie.