One small issue with Jicofo when deploying it in Kubernetes. Nice guys & great documentation to start with. These are the kubelet events for the Jicofo pod:

```
Normal  Started    13m  kubelet            Started container jicofo
Normal  Created    13m  kubelet            Created container jicofo
Normal  Pulled     13m  kubelet            Successfully pulled image "jitsi/jicofo:stable-4627-1"
Normal  Pulling    13m  kubelet            Pulling image "jitsi/jicofo:stable-4627-1"
Normal  Scheduled  13m  default-scheduler  Successfully assigned jitsi/shard-0-jicofo-6c85888786-lkk9w to mvakert
```

The load balancer config resolves the shard backends through an SRV record:

```
server-template shard 0-5 _http._.cluster.local:80 check resolvers kube-dns init-addr none
# A records don't work here because their order might change between calls and would result in different
# _http._.cluster.local:80 is a SRV DNS record
stick-table type string len 128 size 2k expire 1d peers mypeers
```

This comment says that using an SRV query instead of an A query makes the order be preserved: hpi-schul-cloud/jitsi-deployment/blob/caf93c6015913ad4acd7e89c268faad97bef012b/base/ops/loadbalancer/haproxy-configmap.yaml#L68

Have you ever seen a problem where each HAProxy instance assigns different shard names when the servers are resolved via DNS? Is there anything special in your setup that preserves the order of shards in a DNS response?

You might think of it as if you were trying to throw x265 video data at HAProxy. I was trying a similar setup, but not exactly the same. I'm confused that you are still trying to use those components where they can't be applied, because they are totally different things. Thus HAProxy and Ingress don't apply here at all: a k8s Ingress object is for processing HTTP requests, which video stream data basically is not.

That way you can scale very easily; we use it that way, and it's super easy to add a new JVB instance (the only variable here is the JVB nickname, which needs to be different for each JVB, so we basically use the hostname and that's it).
So in your setup I would make all JVBs listen on one port (i.e. the default 10000/UDP) and roll out VMs with their own IP for each JVB. Clients will basically connect directly to the JVB's exposed hostPort; Jicofo informs all clients where to connect, i.e. it tells them where the JVB bridges are (the IP address and port on which each JVB listens). An Ingress is thus not needed. I think you should basically try to avoid any component that is not necessary for the video streams and make the "path" between clients and JVB as short as possible in terms of components.

I would avoid running multiple JVB instances on the same VM. We run each JVB pod on a different VM; the address of each JVB is the VM's IP address and the port specified in hostPort. We run Jitsi in Kubernetes and have configured JVB so that the pods listen on a predefined hostPort; from the Kubernetes point of view, they are not exposed to the public via a Service.

This is what I get when I try to run the development overlay:

```
Unable to recognize "STDIN": no matches for kind "ServiceMonitor" in version " /v1"
Unable to recognize "STDIN": no matches for kind "PrometheusRule" in version " /v1"
Unable to recognize "STDIN": no matches for kind "Prometheus" in version " /v1"
Unable to recognize "STDIN": no matches for kind "PodMonitor" in version " /v1"
Unable to recognize "STDIN": no matches for kind "Alertmanager" in version " /v1"
Unable to recognize "STDIN": no matches for kind "DecoratorController" in version " /v1alpha1"
Unable to recognize "STDIN": no matches for kind "Kibana" in version " /v1"
Unable to recognize "STDIN": no matches for kind "Elasticsearch" in version " /v1"
Unable to recognize "STDIN": no matches for kind "ClusterIssuer" in version " cert-manager.io/v1alpha2"
Unable to recognize "STDIN": no matches for kind "Certificate" in version " cert-manager.io/v1alpha2"
```

I tried to run it on a Mac with these software versions:

```
Client Version: version.Info]
```
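The one-JVB-per-VM, hostPort-based setup described above could be sketched roughly like this. This is a minimal illustration, not the poster's actual manifest: the image tag, labels, and the `JVB_NICKNAME` variable name are assumptions; the only point being made is one pod per node, a fixed UDP hostPort, and a per-node unique nickname.

```yaml
# Sketch only: one JVB pod per VM, reachable directly at <node IP>:10000/UDP.
# No Service or Ingress in front of the media path.
apiVersion: apps/v1
kind: DaemonSet            # one pod per node, matching the advice above
metadata:
  name: jvb
spec:
  selector:
    matchLabels:
      app: jvb
  template:
    metadata:
      labels:
        app: jvb
    spec:
      containers:
        - name: jvb
          image: jitsi/jvb:stable-4627-1   # illustrative tag
          env:
            - name: JVB_NICKNAME           # hypothetical variable name; the
              valueFrom:                   # point is a unique per-node value
                fieldRef:
                  fieldPath: spec.nodeName
          ports:
            - containerPort: 10000
              hostPort: 10000              # clients connect here directly
              protocol: UDP
```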
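As a rough illustration of the SRV-ordering question earlier in the thread: the sketch below uses made-up record data (no real DNS) to show why shard-name assignment stays consistent across HAProxy instances when each one can derive the same ordering from named SRV targets, but can diverge when each instance only sees bare A-record answers in a different rotation. Sorting by target name here is a simplification of what HAProxy's `server-template` + SRV resolution achieves.

```python
import random

# Hypothetical resolved SRV targets; real answers would come from kube-dns.
# SRV answers carry a target name, so every resolver can order them
# identically; bare A answers are just IPs whose order may rotate.
SRV_TARGETS = ["shard-2.jitsi.svc", "shard-0.jitsi.svc", "shard-1.jitsi.svc"]

def assign_shard_names(targets):
    """Map server-template slots (shard1, shard2, ...) onto targets after
    sorting by name, so the mapping is independent of answer order."""
    ordered = sorted(targets)
    return {f"shard{i + 1}": t for i, t in enumerate(ordered)}

# Two HAProxy instances receiving the same records in different orders
# still compute identical slot -> target mappings:
a = assign_shard_names(random.sample(SRV_TARGETS, k=3))
b = assign_shard_names(random.sample(SRV_TARGETS, k=3))
assert a == b
print(a["shard1"])  # shard-0.jitsi.svc
```

With plain A records there is no name to sort on, so each instance would fill its `shard1..shard6` slots in whatever order the answers arrived, which is exactly the inconsistent-naming problem the config comment warns about.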