[ocaml-infra] puppet

Sylvain Le Gall sylvain+ocaml at le-gall.net
Fri Dec 7 10:15:02 GMT 2012


Hi,

2012/12/6 Karl Ward <kw1213 at nyu.edu>

> Sorry for the late response, it has been a busy few days, plus my laptop
> is dying so I've been on borrowed computers while Apple refuses to
> acknowledge that my system has a problem.
>
> Puppet is great for system administration primarily because of the
> documentation aspect.  Most routine operations (creating users, setting
> passwords, installing software, starting services, configuring services)
> can be done in a Puppet manifest.  The major benefit is that the act of
> configuration becomes self-documenting.  You mentioned documentation of
> system configuration as a separate step--yes, documentation is easy, but
> it's usually the step that gets skipped.
>
> As for the need for a Puppet server, I agree that setting up a Puppet
> server is not how you want to spend your time.  However, you don't actually
> need a Puppet server at all.  Many large sites use Git or another repo
> system to store the Puppet manifests, and instead of contacting a server,
> each managed node looks at its own local copy of the Puppet manifests.
>  Each node periodically does a repo pull and keeps its own copy up to date.
>  The only central server involved is a repo, which you probably have
> anyway.  This practice is pretty common at very large Puppet sites (I've
> heard it is what Google uses, for instance).  We don't do this yet, but as
> soon as we have nodes on a public cloud we will.  Using Git to distribute
> Puppet manifests is described in one or more of the Puppet books, and a
> somewhat old post about it is online here:
> http://bitfieldconsulting.com/scaling-puppet-with-distributed-version-control
>
>
>
True, one recommended way of managing a Puppet configuration is to keep the
manifests in a GitHub repository and pull them directly on each node (via a
cron job). That is quite easy to set up.
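A minimal sketch of that pull-based setup, assuming the manifests are cloned under /etc/puppet/manifests and applied with `puppet apply` (the paths, branch, and script name are illustrative, not an agreed layout):

```shell
#!/bin/sh
# Hypothetical node-side update script, run periodically from cron.
# Pulls the latest manifests and applies them locally, so no Puppet
# server is involved, only the Git repository.
set -e
cd /etc/puppet/manifests
git pull --ff-only origin master
puppet apply --modulepath=modules manifests/site.pp
```

A crontab line such as `*/30 * * * * root /usr/local/sbin/puppet-pull.sh` would then keep each node converging every half hour.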

Concerning the usage of Puppet itself, I think it is worth the effort if you
have more than two instances to set up, and I expect ocaml.org will be more
than one instance.

The benefits are not immediate, but in the long term it pays off.

Also, when dealing with configuration files, I strongly recommend using
Augeas. Let's set up a private GitHub repository (server configuration should
not be public) and we can start putting things there. I'll probably start with
the configuration of the forge.ocamlcore.org instances, just as an example...
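As an illustration of the Augeas approach, here is a hedged sketch of a Puppet resource that edits a single setting in sshd_config in place instead of templating the whole file (the option, file, and service name are examples, not something we have agreed on):

```puppet
# Hypothetical manifest fragment: Puppet's built-in augeas resource
# type edits one key in /etc/ssh/sshd_config through the Augeas lens,
# leaving the rest of the hand-maintained file untouched.
augeas { 'sshd_permit_root_login':
  context => '/files/etc/ssh/sshd_config',
  changes => 'set PermitRootLogin no',
  notify  => Service['ssh'],
}

service { 'ssh':
  ensure => running,
  enable => true,
}
```

This is the main reason Augeas pairs well with Puppet: the change is idempotent and coexists with local edits to the same file, rather than overwriting them.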


> On Sat, Dec 1, 2012 at 7:47 PM, Ashish Agarwal <agarwal1975 at gmail.com> wrote:
>
>> Hi,
>>
>> I'm looping in our awesome sys admin Karl, who is our local puppet
>> master. Karl, not sure there is enough info below for you to give input,
>> but maybe you can ask questions or provide general advice.
>>
>> Just yesterday, Karl offered to do pretty much anything for the ocaml.org
>> infrastructure if it somehow involved him working on a Raspberry Pi cluster.
>> Anil, can you hook us up?
>>
>> -Ashish
>>
>>
>> On Sat, Dec 1, 2012 at 1:29 PM, Anil Madhavapeddy <anil at recoil.org> wrote:
>>>
>>> I've been playing around with Puppet this weekend at last, and I'm less
>>> convinced we really need it.  I'm putting the mail server in a VM running
>>> Postfix, and it doesn't seem very necessary to have all the complexity of
>>> Puppet itself when each of the services is essentially running just a
>>> single daemon (email or web or sync, etc).
>>>
>>> So I'm inclined to revert back to the usual XenServer way.  Create a
>>> Wheezy VM, add a dssh key to regularly apt-get update on all of them, and
>>> create clones in XenServer for each of the services.  This is pretty easy
>>> to back up and document.  We can still host your Puppet in a VM too, but
>>> not for the really important services like e-mail, which I'd prefer to
>>> keep configured in a simpler way.
>>>
>>> Thoughts?
>>>
>>> -anil
>>>
>>
>>
>
>
> --
> Karl Ward
>
>