Instead of giving LLM tools SSH access or installing them on a server, the following:

    promptctl ssh user@server

makes a set of locally defined prompts "appear" within the remote shell as executable command-line programs.

For example:

    # on remote host
    analyze-config --help
    Usage: analyze-config [OPTIONS] --path <path>

    Prompt inputs:
      --all
      --path <path>
      --opt
      --syntax
      --sec

would render and execute the following prompt:

    You are an expert sysadmin and security auditor analyzing the
    configuration file {{path}}, with contents:

    {{cat path}}

    Identify:
    {{#if (or all syntax) }}- Syntax Problems{{/if}}
    {{#if (or all sec) }}- Misconfigurations and security risks{{/if}}
    {{#if (or all opt) }}- Optimizations{{/if}}

    For each finding, state the setting, the impact, a fix, and a
    severity (Critical/Warning/Info).

Nothing gets installed on the server, API keys never leave your computer, and you have full control over the context given to the LLM.

GitHub: https://github.com/tgalal/promptcmd/
Documentation: https://docs.promptcmd.sh/
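To make the rendering step concrete, here is a minimal sketch of how flag-driven conditionals like `{{#if (or all sec) }}…{{/if}}` and placeholders like `{{path}}` could be expanded. This is illustrative only: it is not promptcmd's actual template engine, the Handlebars-style `(or …)` helper syntax is taken from the example above, and command interpolation such as `{{cat path}}` is deliberately omitted.

```python
import re

def render(template: str, flags: dict, values: dict) -> str:
    """Expand a tiny subset of Handlebars-style template syntax (sketch only)."""
    # Keep the body of {{#if (or a b)}}...{{/if}} when any named flag is
    # truthy, drop it otherwise.
    def expand_if(m):
        names = m.group(1).split()
        return m.group(2) if any(flags.get(n) for n in names) else ""

    out = re.sub(r"\{\{#if \(or ([^)]+)\)\s*\}\}(.*?)\{\{/if\}\}",
                 expand_if, template, flags=re.DOTALL)
    # Substitute simple {{name}} placeholders from the provided values.
    out = re.sub(r"\{\{(\w+)\}\}",
                 lambda m: str(values.get(m.group(1), "")), out)
    return out

template = (
    "Analyzing {{path}}\n"
    "Identify:\n"
    "{{#if (or all syntax) }}- Syntax Problems\n{{/if}}"
    "{{#if (or all sec) }}- Misconfigurations and security risks\n{{/if}}"
)

# Passing only --sec keeps the security bullet and drops the syntax one.
print(render(template, flags={"sec": True}, values={"path": "/etc/nginx.conf"}))
```

Running this with `flags={"sec": True}` prints the security bullet but not the syntax one, mirroring how `analyze-config --sec --path /etc/nginx.conf` would narrow the prompt in the example above.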