

Re: KVM NFS template image


I confirm the bug... the template is not removed...

And I can confirm a possible solution:

In the "volumes" table, the "template_id" field should be set to NULL for
this particular volume after the volume is restored from a snapshot - on the
next storage scavenger run the template will be marked properly for GC and
removed...
(Volumes of VMs deployed from an ISO file also have NULL in the
"template_id" field.)
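A minimal sketch of that workaround as SQL against the CloudStack database
(the table and column names are taken from the description above; the volume
id 42 is a placeholder - use the id of the actual restored volume):

```sql
-- Clear the restored volume's template reference so the storage
-- scavenger can mark the now-unreferenced template for GC.
-- (id = 42 is a placeholder; substitute the restored volume's id.)
UPDATE volumes SET template_id = NULL WHERE id = 42;
```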






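For illustration only - the scavenger decision described in the thread below
(a template on a pool becomes eligible for GC once no volume references it
via "template_id") can be sketched like this; the function and field names
are illustrative, not actual CloudStack code:

```python
def removable_templates(pool_template_ids, volumes):
    """Return template ids eligible for GC: those no longer referenced
    by any volume's template_id (i.e. no child/backing relationship)."""
    referenced = {v["template_id"] for v in volumes
                  if v["template_id"] is not None}
    return [t for t in pool_template_ids if t not in referenced]

# A volume restored from snapshot that still carries template_id = 219
# keeps the template alive (the reported bug):
print(removable_templates([219], [{"id": 37, "template_id": 219}]))   # []

# With template_id cleared (the proposed fix), the template is GC-able:
print(removable_templates([219], [{"id": 37, "template_id": None}]))  # [219]
```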

On Thu, 22 Nov 2018 at 22:29, ran huang <ran.huang134@xxxxxxxxx> wrote:

> Hi Andrija,
>
> That is precisely the step I went through.
>
> However, the template was not cleaned up after the expected interval when
> no other volume has it as a backing image.
>
> regards,
> Ran
> On 11/22/2018 12:53 PM, Andrija Panic wrote:
> > Hi Ran,
> >
> > not sure what you mean (I did not quite understand your explanation) -
> > but here is an exercise from my side (just done it):
> >
> > https://pasteboard.co/HOowNao.png
> >
> > Check the image - explanation below:
> >
> >
> > CentOS 5.5 minimal (builtin) template, spin up a new VM:
> > --- new volume created with UUID/PATH (name on the NFS file
> > system): 021e8602-235b-4e0d-b9e4-04f0ff46399f
> > --- its backing file:
> > /mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/93682641-e7f6-11e8-8f64-089e01d943be
> >
> > Create a snapshot via the GUI - a qcow2 snapshot is created, the whole
> > snapshot is copied over (converted via the qemu-img tool - "ps aux | grep
> > qemu-img") to Secondary NFS Storage - and the snapshot is REMOVED from
> > the original volume on Primary NFS Storage ("qemu-img snapshot -l
> > 021e8602-235b-4e0d-b9e4-04f0ff46399f" shows zero snaps after ACS has
> > finished creating the snapshot).
> > The volume still points to its backing file - no changes so far (as
> > expected)
> >
> > Then I restore the volume from the snapshot.
> > CloudStack will (my conclusion from the exercise) remove the original
> > volume on NFS Primary Storage (021e8602-235b-4e0d-b9e4-04f0ff46399f),
> > then copy back (convert via qemu-img) the qcow2 file from Secondary
> > Storage to Primary Storage - but it will use the SAME NAME, so you again
> > see 021e8602-235b-4e0d-b9e4-04f0ff46399f on your NFS mount point.
> >
> > This time when you check the image with "qemu-img info", it will show it
> > has NO backing file at all - since it is a brand new volume/qcow2 image
> > (created as a copy from Secondary Storage)
> >
> > that is how it works
> >
> > I assume the template will again be cleaned up/removed from Primary
> > Storage if no other VMs/volumes use it as their backing (parent) image.
> >
> > Makes sense?
> >
> > Cheers
> >
> > On Thu, 22 Nov 2018 at 21:18, ran huang <ran.huang134@xxxxxxxxx> wrote:
> >
> >> Thanks Andrija, just tested it myself with expunge and it works as expected.
> >>
> >> However, for KVM, when I revert a qcow2 disk from a snapshot, which
> >> removes the backing chain to the template, the template will not be
> >> removed.
> >>
> >> So it seems that although the qcow2 disk is no longer backed by the
> >> template, the template still considers the disk as its child in this
> >> case (revert from snapshot).
> >>
> >> regards,
> >> Ran
> >>
> >> On 11/19/2018 10:43 AM, Andrija Panic wrote:
> >>> new template, deployed a new VM, destroyed the VM (with Expunge option)...
> >>>
> >>> up to 120 sec later... (storage.cleanup.interval=120 sec, global config
> >>> option)
> >>>
> >>> 2018-11-19 19:35:59,525 DEBUG Storage pool garbage collector found 1
> >>> templates to clean up in storage pool: Primary-storage - NFS
> >>> 2018-11-19 19:35:59,525 DEBUG [c.c.s.StorageManagerImpl]
> >>> (StorageManager-Scavenger-1:ctx-2c88c8e0) (logid:040f4ad1) Storage pool
> >>> garbage collector has marked template with ID: 219 on pool 4 for
> garbage
> >>> collection
> >>>
> >>> Another 120 sec later... (storage.cleanup.delay=120 sec)
> >>>
> >>> 2018-11-19 19:37:59,598 DEBUG [c.c.s.StorageManagerImpl]
> >>> (StorageManager-Scavenger-2:ctx-f9dd338d) (logid:9ae40975) Storage pool
> >>> garbage collector found 1 templates to clean up in storage pool:
> >>> Primary-storage - NFS
> >>> ...
> >>> 2018-11-19 19:37:59,638 DEBUG [c.c.t.TemplateManagerImpl]
> >>> (StorageManager-Scavenger-2:ctx-f9dd338d) (logid:9ae40975) Evicting
> >>> TmplPool[37-219-4-563ea0f5-5164-4ac4-a183-728f418269b7]
> >>> 2018-11-19 19:37:59,643 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> >>> (StorageManager-Scavenger-2:ctx-f9dd338d) (logid:9ae40975)
> >>> getCommandHostDelegation: class
> >> com.cloud.agent.api.storage.DestroyCommand
> >>> ...
> >>> 2018-11-19 19:37:59,665 DEBUG [c.c.t.TemplateManagerImpl]
> >>> (StorageManager-Scavenger-2:ctx-f9dd338d) (logid:9ae40975) Successfully
> >>> evicted template andrija-test-tmpl from storage pool null
> >>>
> >>> template "andrija-test-tmpl" deleted...
> >>>
> >>> Hope that helps, Ran.
> >>>
> >>> Cheers
> >>> Andrija
> >>>
> >>> On Mon, 19 Nov 2018 at 19:11, Andrija Panic <andrija.panic@xxxxxxxxx>
> >> wrote:
> >>>> True (at least I'm sure for SolidFire) - but I believe it holds in
> >>>> general too (will test this now...)
> >>>>
> >>>> On Mon, 19 Nov 2018 at 18:51, Dag Sonstebo <
> Dag.Sonstebo@xxxxxxxxxxxxx>
> >>>> wrote:
> >>>>
> >>>>> Developers please correct me... but as far as I remember there is a
> >>>>> garbage collector which does remove the templates from primary
> storage
> >> once
> >>>>> they are not needed (i.e. have no more "child VMs"). This is
> >> controlled by
> >>>>> the global setting "storage.template.cleanup.enabled".
> >>>>>
> >>>>> Regards,
> >>>>> Dag Sonstebo
> >>>>> Cloud Architect
> >>>>> ShapeBlue
> >>>>>
> >>>>>
> >>>>> On 16/11/2018, 22:51, "ran huang" <ran.huang134@xxxxxxxxx> wrote:
> >>>>>
> >>>>>       Hi Andrija,
> >>>>>
> >>>>>       Thanks for the clarification and quick response
> >>>>>
> >>>>>       regards,
> >>>>>       Ran
> >>>>>
> >>>>>       On 11/16/2018 02:15 PM, Andrija Panic wrote:
> >>>>>       > Hi Ran,
> >>>>>       >
> >>>>>       > templates stay on Primary Storage "forever", at least for NFS
> >>>>>       > (they are moved from Secondary to Primary when you deploy the
> >>>>>       > very first VM from a specific template). All VMs have this
> >>>>>       > template's qcow2 as their backing (parent) image.
> >>>>>       >
> >>>>>       > This template is a qcow2 copy of a file from Secondary
> >>>>>       > Storage - and is considered a "parent" image, from which all
> >>>>>       > child images (VM volumes) are created - as you stated, a
> >>>>>       > backing file (qcow2 linked clones, in official terminology)
> >>>>>       >
> >>>>>       > you can have e.g. 100 VMs all linking (having their backing
> >>>>>       > file...) to a template qcow2 file.
> >>>>>       > So in other words, it's not supposed to be removed.
> >>>>>       >
> >>>>>       > Does this make sense?
> >>>>>       >
> >>>>>       > Cheers
> >>>>>       >
> >>>>>       >
> >>>>>       >
> >>>>>
> >>>>> Dag.Sonstebo@xxxxxxxxxxxxx
> >>>>> www.shapeblue.com
> >>>>> Amadeus House, Floral Street, London  WC2E 9DPUK
> >>>>> @shapeblue
> >>>>>
> >>>>>
> >>>>>
> >>>>>       > On Fri, 16 Nov 2018 at 22:38, ran huang <ran.huang134@xxxxxxxxx>
> >>>>>       > wrote:
> >>>>>       >
> >>>>>       >> Greetings All,
> >>>>>       >>
> >>>>>       >> For qcow2 format images, when creating a new VM in KVM, the
> >>>>> template
> >>>>>       >> image is copied from secondary storage to primary storage,
> and
> >> the
> >>>>> root
> >>>>>       >> volume image is created with the template image as a backing
> >> file.
> >>>>>       >>
> >>>>>       >> But when I break this backing chain on primary (expunge the
> >>>>>       >> VM or revert to a snapshot previously created on the root
> >>>>>       >> volume image), the template image is not deleted.
> >>>>>       >>
> >>>>>       >> Might I ask how the template image is going to be cleaned up
> >>>>>       >> from primary storage?
> >>>>>       >>
> >>>>>       >>
> >>>>>       >> addendum:
> >>>>>       >> CS ver 4.9.2 on CentOS 7.2
> >>>>>       >>
> >>>>>       >> regards,
> >>>>>       >> Ran
> >>>>>       >>
> >>>>>       >
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>> --
> >>>>
> >>>> Andrija Panić
> >>>>
> >>
>
>

-- 

Andrija Panić